While these samples are representative of the content of Science.gov,

they are not comprehensive nor are they the most current set.

We encourage you to perform a real-time search of Science.gov

to obtain the most current and comprehensive results.

Last update: August 15, 2014.

1

NASA Astrophysics Data System (ADS)

Our purpose is to determine fission fragment characteristics within the framework of a scission-point model named SPY (Scission Point Yields). This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between fragment properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which ones are dominant in the final state? What is the impact of the compound-nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated, with fixed properties. The most important property of the model is that the nuclear structure of the fragments is derived from fully quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei that are inaccessible to most available fission models.

Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

2013-12-01

2

SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

NASA Astrophysics Data System (ADS)

Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculation of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists of performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given knowledge of the system's state density, averaged quantities such as mass and charge yields and mean kinetic and excitation energies can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state densities. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring the overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to the major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implications for stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

Panebianco, Stefano; Dubray, Noël; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc

2014-04-01
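The microcanonical weighting described in the SPY abstracts can be illustrated with a toy calculation: each candidate split is weighted by the density of available states at scission, and the normalized weights give relative yields. The Fermi-gas form ρ(E) ∝ exp(2√(aE)) and the level-density parameter and energies below are illustrative assumptions, not the SPY model's actual microscopic inputs:

```python
import math

def state_density(E, a=0.1):
    """Toy Fermi-gas state density rho(E) ~ exp(2*sqrt(a*E)); E in MeV."""
    return math.exp(2.0 * math.sqrt(a * E)) if E > 0 else 0.0

def relative_yields(available_energies):
    """Weight each split by its state density and normalize to yields."""
    weights = [state_density(E) for E in available_energies]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical available energies (MeV) at scission for three mass splits.
yields = relative_yields([10.0, 25.0, 15.0])
print(yields)  # the split with the most available energy gets the largest yield
```

In the real model the energy balance and state densities come from HFB calculations for each fragment pair; the normalization step above is the only part carried over faithfully.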

3

The shell effects in the scission-point configuration of fissioning nuclei

NASA Astrophysics Data System (ADS)

In the present work the formal definition of the scission point—the maximal elongation at which the nucleus splits into two fragments—is given. The shape and the deformation energy at the scission point are calculated using the macroscopic–microscopic model. Three minima in the scission point deformation energy are found corresponding to the ‘standard’, ‘supershort’ and ‘superlong’ fission modes. The contribution of each fission mode to the mass distribution of the fission fragments and total kinetic energy is discussed and compared with the experimental results. In the example of the fission of U-235 by thermal neutrons it is shown that the present approach reproduces correctly the position of the peaks of the mass distribution of the fission fragments, the value and the fine details of the total kinetic energy distribution and the magnitude of the total excitation energy of the fission fragments.

Ivanyuk, F. A.

2014-05-01

4

Advanced fission models in nuclear data calculations

NASA Astrophysics Data System (ADS)

Transition states at the saddle points and superdeformed or hyperdeformed states in the secondary wells of multiple-humped potential barriers play an important role in low-energy fission processes. In the present work discrete collective spectra at large nuclear deformations are predicted by means of the dinuclear model and combined with the optical model for fission of the Empire-3 system of codes. The formalism is applied to the 233U(n, f) reaction and the computed cross section compared with recent experimental results of the n_TOF Collaboration. Angular anisotropies of fission fragments are evaluated with an improved version of the scission-point model.

Shneidman, T. M.; Andreev, A. V.; Sin, M.; Massimi, C.; Vannini, G.; Ventura, A.

2012-05-01

5

Statistical prescission point model of fission fragment angular distributions

NASA Astrophysics Data System (ADS)

In light of recent developments in fission studies such as slow saddle to scission motion and spin equilibration near the scission point, the theory of fission fragment angular distribution is examined and a new statistical prescission point model is developed. The conditional equilibrium of the collective angular bearing modes at the prescission point, which is guided mainly by their relaxation times and population probabilities, is taken into account in the present model. The present model gives a consistent description of the fragment angular and spin distributions for a wide variety of heavy and light ion induced fission reactions.

John, Bency; Kataria, S. K.

1998-03-01

6

We determine criteria for modelling plan-based “but” in task-oriented dialogue (TOD), following work by Lagerwerf [5] and focusing on cases in which it signals denial of expectation (DofE) and concession, to which end we propose a novel treatment of concession in TOD. We present initial considerations for an algorithm to address plan-based “but” in an Information State (IS) model of

Kavita E. Thomas

2003-01-01

7

NASA Technical Reports Server (NTRS)

A prediction of the future population of satellites, satellite fragments, and assorted spacecraft debris in Earth orbit can be reliably made only after three conditions are satisfied: (1) the size and spatial distributions of these Earth-orbiting objects are established at some present-day time; (2) the processes of orbital evolution, explosions, hypervelocity impact fragmentation, and atmospheric drag are understood; and (3) a reasonable traffic model for the future launch rate of Earth-orbiting objects is assumed. The theoretician will then take these three quantities as input data and will carry through the necessary mathematical and numerical analyses to project the present-day orbital population into the future.

Zook, H. A.

1985-01-01

8

Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

2005-09-01

9

FISSION OF 238U INDUCED BY INELASTIC SCATTERING OF 120 MeV α-PARTICLES

The fission decay of 238U has been measured as a function of excitation energy in inelastic scattering of 120 MeV α-particles. Total kinetic energies and masses of fission fragments were measured by the double-energy method. It is observed that the total kinetic energy EK decreases and that the valley in the mass distribution is reduced when the excitation energy of the system is increased. No indication of anomalous total kinetic energy release in the region of the giant quadrupole resonance has been found. A qualitative interpretation of the data is given on the basis of a static scission point model.

Back, B.B.; Shotter, A.C.; Symons, T.J.M.; Bice, A.; Gelbke, C.K.; Awes, T.C.; Scott, D.K.

1980-09-01

10

NASA Astrophysics Data System (ADS)

Energy correlation measurements were performed for the photofission of 235U with 12-, 15-, 20-, 30-, and 70-MeV bremsstrahlung. Overall fragment mass and kinetic energy distributions are deduced. The behavior of the total fragment kinetic energy as a function of the fragment mass and the excitation energy of the compound nucleus is studied. The results are interpreted in terms of the scission-point model of Wilkins et al. NUCLEAR REACTIONS, FISSION 235U(γ,f), Eγ=12, 15, 20, 30, and 70 MeV; measured fragment energies E1, E2; deduced N(m, EK).

Jacobs, E.; de Clercq, A.; Thierens, H.; de Frenne, D.; D'Hondt, P.; de Gelder, P.; Deruytter, A. J.

1981-10-01

11

Models are tools; they need to fit both the hand and the task. Presence or absence of a feature such as a pacemaker or a cascade is not in itself good. Or bad. Criteria for model evaluation involve benefit-cost ratios, with the numerator a function of the range of phenomena explained, goodness of fit, consistency with other nearby models, and intangibles such as beauty. The denominator is a function of complexity, the number of phenomena that must be ignored, and the effort necessary to incorporate the model into one's parlance. Neither part of the ratio can yet be evaluated for MTS, whose authors provide some cogent challenges to SET.

Killeen, P R

1999-01-01

12

Effects of large angular momenta on the fission properties of Pt isotopes

NASA Astrophysics Data System (ADS)

The effects of large angular momenta on the fission properties of Pt isotopes are studied. Fissioning Pt systems were produced in fusion reactions of 16O + 170Yb and 32S + 144,150,152,154Sm in the energy range of 0.8-2.0 times the fusion barrier. The sub-barrier fusion-fission cross sections show a strong dependence on the target deformation. A pronounced increase in the total kinetic energy release with bombarding energy is accounted for, in a simple model calculation, by the increasing contribution of centrifugal energy. The widths of the mass and kinetic energy distributions increase rapidly with the angular momentum of the fissioning system. This effect is not accounted for by simple scission point model calculations. NUCLEAR REACTIONS, FISSION 16O + 170Yb, E=90-148 MeV, 32S + 144,150,152,154Sm, E=180-230 MeV, measured fission cross sections σf(E), total kinetic energy and mass distributions as a function of bombarding energy. Analysis in terms of the scission point model of fission.

Glagola, B. G.; Back, B. B.; Betts, R. R.

1984-02-01

13

Role of deformed shell effects on the mass asymmetry in nuclear fission of mercury isotopes

NASA Astrophysics Data System (ADS)

Until now, the mass asymmetry in the nuclear fission process has been understood in terms of the strong influence of the nuclear structure of the nascent fragments. Recently, a surprising asymmetric fission has been discovered in the light mercury region and has been interpreted as the result of the influence of the nuclear structure of the parent nucleus, totally discarding the influence of the fragments' structure. To assess the role of the fragment shell effects in the mass asymmetry in this particular region, a scission-point model, based on a full energy balance between the two nascent fragments, has been developed using one of the best theoretical descriptions of microscopic nuclear structure. As for actinides, this approach shows that the asymmetric splitting of the 180Hg nucleus and the symmetric one of 198Hg can be understood on the basis of only the microscopic nuclear structure of the fragments at scission.

Panebianco, Stefano; Sida, Jean-Luc; Goutte, Héloise; Lemaître, Jean-François; Dubray, Noël; Hilaire, Stéphane

2012-12-01

14

NASA Astrophysics Data System (ADS)

Post- and preneutron-emission mass and kinetic energy distributions of the fragments emitted in the photofission of 232Th with 6.44, 7.33, 8.35, 9.31, 11.13 and 13.15 MeV bremsstrahlung have been studied. Energy correlation and γ-spectrometric measurements were performed. Sb, Ru and Cd were separated chemically to determine postneutron yields in the symmetric mass region. The 232Th system predominantly splits in an asymmetric way with a maximum yield for heavy fragments in the region of mass 140. An enhanced yield around heavy mass 134 is observed, becoming of increasing importance with increasing compound nucleus excitation energy. For 6.44 and 7.35 MeV bremsstrahlung induced fission no symmetric component in the mass distribution could be observed. For the higher endpoint energies symmetric fission becomes more and more evident. From the symmetric fission yields at different excitation energies, using barrier penetration calculations, the height of the symmetric fission barrier is estimated to be of the order of 7.5 to 7.7 MeV. The total fragment kinetic energy shows a minimum for symmetric splits and a maximum for splits with heavy mass in the vicinity of mass 132. It increases with increasing excitation energy of the 232Th compound nucleus. This effect is especially pronounced in the energy region just above the barrier. It is observed for all masses, but mass splits with heavy mass in the vicinity of mass 132 show the strongest effects. The fragment mass distributions for 232Th(γ, f) show a clear difference when compared with those for α-particle accompanied fission of 235U. Our results are interpreted in the framework of the Brosa fission channels model and in the scission point model. They also provide information concerning the dissipation of collective energy into the intrinsic degrees of freedom during the transition from saddle to scission point.

Piessens, M.; Jacobs, E.; Pommé, S.; De Frenne, D.

1993-05-01

15

Fragment mass and kinetic energy distributions for 242Pu(sf), 241Pu(nth,f), and 242Pu(γ,f)

NASA Astrophysics Data System (ADS)

Energy correlation measurements were performed for the spontaneous fission of 242Pu, the thermal-neutron-induced fission of 241Pu, and the photofission of 242Pu with 12-, 15-, 20-, and 30-MeV bremsstrahlung. The photofission cross section for 242Pu was determined up to 30 MeV. For 242Pu(sf) the overall kinetic energy distribution is strongly asymmetric and the overall mass distribution has a very high peak yield (9%). Important deviations of the average total kinetic energy release

Thierens, H.; Jacobs, E.; D'Hondt, P.; de Clercq, A.; Piessens, M.; de Frenne, D.

1984-02-01

16

Models, Fiction, and Fictional Models

NASA Astrophysics Data System (ADS)

The following sections are included: Introduction; Why Most Models in Science Are Not Fictional; Typically Fictional Models in Science; Modeling the Unobservable; Fictional Models for the Unobservable?; References.

Liu, Chuang

2014-03-01

17

NASA Technical Reports Server (NTRS)

The types of models used in assessments of possible chemical perturbations to the stratosphere are reviewed. The status of one- and two-dimensional models is discussed. The problem of model validation is covered before the status of photochemical modeling efforts is discussed. A hierarchy of tests for photochemical models is presented.

Pyle, J. A.; Butler, D. M.; Cariolle, D.; Garcia, R. R.; Grose, W. L.; Guthrie, P. D.; Ko, M.; Owens, A. J.; Plumb, R. A.; Prather, M. J.

1985-01-01

18

NSDL National Science Digital Library

The Fair model web site includes a freely available United States macroeconomic econometric model and a multicountry econometric model. The models run on the Windows OS. Instructors can use the models to teach forecasting, run policy experiments, and evaluate historical episodes of macroeconomic behavior. The web site includes extensive documentation for both models. The simulation is for upper-division economics courses in macroeconomics or econometrics. The principal developer is Ray Fair at Yale University.

Blecha, Betty

19

ERIC Educational Resources Information Center

Suggests building models as a way to reinforce and enhance related subjects such as architectural drafting, structural carpentry, etc., and discusses time, materials, scales, tools or equipment needed, how to achieve realistic special effects, and the types of projects that can be built (model of complete building, a panoramic model, and model…

Levenson, Harold E.; Hurni, Andre

1978-01-01

20

NSDL National Science Digital Library

In this activity, PVC pipe, plastic water bottles and vinyl tubing are used to make a simple working toilet model. The model shows the role of a siphon in the flushing of a toilet. Educators can pre-assemble this model and use it for demonstration purposes or engage learners in the model building process.

Rathjen, Don

2005-01-01

21

NSDL National Science Digital Library

Chapter 1 defines and discusses models in a broad, and perhaps unusual, way. In particular, the chapter stresses the framework of personal models that underlie science and learning across fields. Subsequent chapters will deal more with particular kinds of expressed models that are important in science and science teaching: physical models, analog models and plans, mathematical models, and computer simulations. Throughout, the book examines how all models are important to science, how they are used, and how to use them effectively. They can and should be used not only to teach science, but also to teach students something about the process of learning and about the nature of knowledge itself.

Ireton, Shirley W.; Gilbert, Steven W.

2003-01-01

22

Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two component plasma in one dimension. A stationary point of the model is described.

Yost, S.A.

1991-05-01

23

Models-3 is a third generation air quality modeling system that contains a variety of tools to perform research and analysis of critical environmental questions and problems. These tools provide regulatory analysts and scientists with quicker results, greater scientific accuracy ...

24

NASA Technical Reports Server (NTRS)

Recent developments at several levels of statistical turbulence modeling applicable to aerodynamics are briefly surveyed. Emphasis is on examples of model improvements for transonic, two-dimensional flows. Experience with the development of these improved models is cited to suggest methods of accelerating the modeling process necessary to keep abreast of the rapid movement of computational fluid dynamics into the computation of complex three-dimensional flows.

Rubesin, Morris W.

1987-01-01

25

NSDL National Science Digital Library

The module provides background information for the Characteristics of Operational NWP Models module (also in the NWP PDS), which contains current information about the characteristics and architecture of commonly used operational models, their operationally significant strengths and weaknesses, and model assessment tools. The subject matter expert for this module is Dr. Ralph Petersen of the National Centers for Environmental Prediction, Environmental Modeling Center (NCEP/EMC).

Spangler, Tim

1999-12-10

26

NSDL National Science Digital Library

Each model organism has its own advantages and disadvantages. Choosing an appropriate model depends on the question being asked. Many laboratories find it useful to perform parallel experiments in two or more model systems to understand different aspects of a biochemical process. This animation from Cold Spring Harbor Laboratory's Dolan DNA Learning Center presents Model Organisms through a series of illustrations of the processes involved.

2012-11-19

27

NSDL National Science Digital Library

The EJS Energizer model explores the relationship between kinetic, potential, and total energy. Users create a potential energy curve and observe the resulting motion. The Energizer model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file will run the program if Java is installed.

Gallis, Michael R.

2008-10-27

28

NSDL National Science Digital Library

At the end of the first chapter, you realized, hopefully, that the model of an atom that we have so far (Rutherford's model of a concentrated positive nucleus with negative charges around it) doesn't go very far in helping us explain observations. So, why not try to make that model better? This chapter shows you how it's done.

Robertson, William C.

2007-01-01

29

NSDL National Science Digital Library

This worksheet implements an SIR (Susceptible/Infected/Resistant) model of epidemiology for vector-borne diseases. Up to three microbial strains with different virulence and transmission parameters can be modeled and the results graphed. Originally designed to explore coevolution of myxoma and rabbits, the model is easily generalized to other systems.

Tony Weisstein (Truman State University;Biology)

2007-06-20
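As a rough sketch of the kind of compartmental dynamics such a worksheet implements, a single-strain SIR model can be stepped forward with simple Euler integration. The direct-transmission form and the parameter values below are illustrative simplifications (the worksheet itself handles vector-borne transmission and up to three strains):

```python
def sir_step(S, I, R, beta, gamma, dt=0.1):
    """One Euler step of a basic SIR model: infections move S -> I,
    recoveries move I -> R; the population fraction is conserved."""
    new_inf = beta * S * I * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# Illustrative parameters, not the worksheet's values.
S, I, R = 0.99, 0.01, 0.0
for _ in range(1000):   # integrate to t = 100
    S, I, R = sir_step(S, I, R, beta=0.5, gamma=0.1)
print(round(S + I + R, 6))  # conservation check -> 1.0
```

Running several strains in parallel, as the worksheet does, amounts to giving each strain its own (beta, gamma) pair and its own infected compartment drawing from the shared susceptible pool.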

30

ERIC Educational Resources Information Center

Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

James, W. G. G.

1970-01-01

31

NASA Technical Reports Server (NTRS)

A survey designed to provide an introduction to the subject of turbulence modeling, and to explain the need for such models is given. The subject is developed along chronological lines since this provides a logical development plan and also because it then moves from relatively simple phenomenological models through more complicated procedures and ultimately to the subject of large-eddy simulation.

Murphy, J. D.

1984-01-01

32

Model Experiments and Model Descriptions

NASA Technical Reports Server (NTRS)

The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I; Prather and Remsberg [1993]), held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as providing reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the request for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

1999-01-01

33

NSDL National Science Digital Library

While models and analogies are integral to both the learning and practice of science, their use is complex and potentially troublesome. Misconceptions can arise when parts of a model are misleading, missing, or misapplied. Students begin to look critically at models as they investigate a question of personal interest and develop related lessons for use in a local elementary school. This article suggests techniques you can use to analyze models and describes preservice teachers' experiences as they critically examined popular models used in many elementary classrooms.

Frazier, Richard

2003-01-01

34

Functions and Models: Mathematical Models

NSDL National Science Digital Library

Learning objectives: describe the process of mathematical modeling; name and describe some methods of modeling; classify a symbolically represented function as one of the elementary algebraic or transcendental functions; appraise the suitability of different models for interpreting a given set of data.

Freeze, Michael

2003-01-22

35

The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future.

H. Yang

1999-11-04

36

NASA Technical Reports Server (NTRS)

Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

Druyan, Leonard M.

2012-01-01

37

NSDL National Science Digital Library

Students explore the impact of changing river volumes and different floodplain terrain in experimental trials with table top-sized riverbed models. The models are made using modeling clay in aluminum baking pans placed on a slight incline. Water added "upstream" at different flow rates and to different riverbed configurations simulates different potential flood conditions. Students study flood dynamics as they modify the riverbed with blockages or levees to simulate real-world scenarios.

Integrated Teaching And Learning Program

38

NSDL National Science Digital Library

SCARP is the first in a sequence of spreadsheet modeling exercises (SCARP2, LONGPRO, and GLACPRO). In this exercise, students use a simple arithmetic model (a running mean) to simulate the evolution of a scarp (escarpment) across time. Although the output closely resembles an evolving scarp, no real variables are included in the model. The purpose of the exercise, in addition to the simulation, is to develop basic skills in spreadsheeting and especially in graphical display.

Locke, Bill
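The running-mean scheme the SCARP exercise uses can be sketched directly: repeatedly averaging each cell with its neighbors rounds an abrupt step into a smooth ramp, mimicking diffusive scarp degradation. This is a minimal illustration of the arithmetic, not the exercise's actual spreadsheet layout:

```python
def smooth_step(profile):
    """One pass of a 3-point running mean; the two end cells are held fixed."""
    out = profile[:]
    for i in range(1, len(profile) - 1):
        out[i] = (profile[i - 1] + profile[i] + profile[i + 1]) / 3.0
    return out

# Initial scarp: an abrupt step in elevation (arbitrary units).
scarp = [1.0] * 5 + [0.0] * 5
for _ in range(20):   # each pass plays the role of one time step
    scarp = smooth_step(scarp)
print(scarp[4], scarp[5])  # the sharp step has relaxed into a gentle ramp
```

As the exercise notes, no physical variables appear here; the resemblance to a real degrading scarp comes purely from the diffusive character of repeated averaging.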

39

Model checking is an automatic technique for verifying finite-state reactive systems, such as sequential circuit designs and communication protocols. Specifications are expressed in temporal logic, and the reactive system is modeled as a state-transition graph. An efficient search procedure is used to determine whether or not the state-transition graph satisfies the specifications. We describe the basic model checking algorithm and

Edmund M. Clarke

1997-01-01
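The search procedure described above can be sketched, for the simple case of a safety property ("no bad state is reachable"), as a breadth-first traversal of the state-transition graph. The states and transitions below are a hypothetical toy protocol, not an example from the paper:

```python
from collections import deque

def violates_safety(transitions, init, bad):
    """Explicit-state check: is any 'bad' state reachable from 'init'?
    transitions maps each state to a list of its successor states."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if state in bad:
            return True          # counterexample found
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False                 # property holds on all reachable states

# Toy protocol in which the 'error' state is unreachable.
graph = {"idle": ["busy"], "busy": ["idle", "done"], "done": []}
print(violates_safety(graph, "idle", {"error"}))  # False: the property holds
```

Real model checkers handle full temporal-logic specifications and use symbolic or on-the-fly techniques to cope with state explosion, but the reachability core is this same graph search.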

40

NSDL National Science Digital Library

In this lesson, students will explore volcanoes by constructing models and reflect upon their learning through drawing sketches of their models. Once they have finished making their models, they will experiment with making their volcanoes erupt. They will observe how eruption changes the original form of their volcano models. In this way, students see first hand how this type of phenomena creates physical change. While students at this level may struggle to understand larger and more abstract geographical concepts, they will work directly with material that will help them build a foundation for understanding concepts of phenomena that sculpt the earth.

41

The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of the peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of the heat produced by radionuclide decay that is carried away by the ventilation air. One minus the heat removal is called the wall heat fraction: the remaining heat, which is transferred by conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII.10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a); specifically, to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e., wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b); specifically, to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To further satisfy KTI agreements RDTME 3.01 and 3.14 (Reamer and Williams 2001a) by providing the source documentation referred to in the KTI Letter Report, ''Effect of Forced Ventilation on Thermal-Hydrologic Conditions in the Engineered Barrier System and Near Field Environment'' (Williams 2002); specifically, to provide the results of the MULTIFLUX model, which simulates the coupled processes of heat and mass transfer in and around waste emplacement drifts during periods of forced ventilation. This portion of the model report is presented as an Alternative Conceptual Model with a numerical application, and also provides corroborative results used for model validation purposes (Sections 6.3 and 6.4).
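As a minimal sketch of the heat-removal bookkeeping described above (with illustrative numbers only, not values from the report), the ventilation heat-removal fraction and the complementary wall heat fraction can be computed from a simple air-side energy balance:

```python
# Illustrative sketch of the heat removal / wall heat fraction balance.
# All parameter values below are made up for demonstration.

def heat_removal_fraction(m_dot, cp_air, t_out, t_in, q_decay):
    """Fraction of the decay heat carried away by the ventilation air.

    m_dot        : air mass flow rate [kg/s]
    cp_air       : specific heat of air [J/(kg K)]
    t_out, t_in  : air temperature leaving / entering the drift [K]
    q_decay      : heat produced by radionuclide decay [W]
    """
    q_air = m_dot * cp_air * (t_out - t_in)  # heat picked up by the air stream
    return q_air / q_decay

# One minus the heat removal is the wall heat fraction: the share of decay
# heat conducted into the surrounding rock mass.
eta = heat_removal_fraction(m_dot=15.0, cp_air=1005.0,
                            t_out=305.0, t_in=300.0, q_decay=100e3)
wall_heat_fraction = 1.0 - eta
```

In a real analysis both quantities vary along the drift and over time; this sketch only fixes the definitions.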

V. Chipman

2002-10-05

42

Model Selection for Geostatistical Models

We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
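A small numerical sketch of the comparison discussed above (our own illustration, not the authors' code): AIC = 2k − 2 log L is evaluated for a Gaussian geostatistical model with an exponential correlation function, against the same regression with spatial correlation ignored (R = I). Here k counts the regression coefficients, the variance, and (for the spatial model) the correlation range.

```python
import numpy as np

def exp_corr(coords, phi):
    """Exponential spatial correlation: R_ij = exp(-d_ij / phi)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-d / phi)

def gaussian_loglik(y, X, beta, sigma2, R):
    """Log-likelihood of y ~ N(X beta, sigma2 * R)."""
    n = len(y)
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(sigma2 * R)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(sigma2 * R, r))

def aic(loglik, k):
    """Akaike Information Criterion: 2k - 2 log L (smaller is better)."""
    return 2 * k - 2 * loglik

# Synthetic data with genuine spatial correlation (range parameter phi = 3).
rng = np.random.default_rng(0)
n = 60
coords = rng.uniform(0, 10, size=(n, 2))
X = np.column_stack([np.ones(n), coords[:, 0]])
R_true = exp_corr(coords, phi=3.0)
y = X @ np.array([1.0, 0.5]) + np.linalg.cholesky(R_true) @ rng.normal(size=n)

aics = {}
for name, R, k in [("independent", np.eye(n), 3), ("spatial", R_true, 4)]:
    Ri_X = np.linalg.solve(R, X)                    # GLS estimate of beta
    beta = np.linalg.solve(X.T @ Ri_X, Ri_X.T @ y)
    resid = y - X @ beta
    sigma2 = (resid @ np.linalg.solve(R, resid)) / n  # ML estimate of sigma2
    aics[name] = aic(gaussian_loglik(y, X, beta, sigma2, R), k)
# With spatially correlated data, the spatially-aware model attains the
# lower (better) AIC despite its extra parameter.
```

In practice the range parameter would be estimated rather than fixed; this sketch only shows the AIC bookkeeping.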

Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

2006-02-01

43

NSDL National Science Digital Library

In this activity, learners create models of bugs. Learners use household materials like plastic cups and straws to create models of bugs like centipedes and spiders. The activity is covered in the first 5 pages of the document. There are also a number of related activities that introduce learners to the world of invertebrates.

Plymouth, The U.; Council, The B.

2012-06-26

44

ERIC Educational Resources Information Center

Argues that the organization of cognitive structures for technical domains can be visualized as a network of connected thinkable models. Describes a taxonomy of models that has been developed and discusses the issue of how representations relate to human modes of perception and action. Contains 25 references. (DDR)

Lawler, Robert W.

1996-01-01

45

The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. 
Customers will be given access to OSPREY to use and evaluate the model.
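The breakthrough behaviour described above can be illustrated with a much-reduced sketch: isothermal, isobaric, linear isotherm, and a linear-driving-force adsorption rate — far simpler than OSPREY's non-isothermal, non-isobaric model — with made-up parameters throughout.

```python
import numpy as np

# 1D plug flow through a packed bed with linear-driving-force adsorption:
#   dc/dt = -v dc/dz - lam * dq/dt,   dq/dt = k * (K*c - q)
# All parameters are illustrative, not krypton data.
v, L, nz = 0.1, 0.5, 50        # interstitial velocity [m/s], bed length [m], cells
dz, dt = L / nz, 0.05          # grid spacing, time step (v*dt/dz = 0.5: stable)
k, K, lam = 0.5, 5.0, 2.0      # LDF rate, linear isotherm slope, capacity factor
c0, steps = 1.0, 6000          # inlet concentration, number of time steps

c = np.zeros(nz)               # gas-phase concentration along the column
q = np.zeros(nz)               # adsorbed-phase loading along the column
outlet = np.empty(steps)
for it in range(steps):
    dq = dt * k * (K * c - q)                    # adsorption rate (sink for gas)
    upwind = np.diff(np.concatenate(([c0], c)))  # first-order upwind convection
    c = c - (v * dt / dz) * upwind - lam * dq
    q = q + dq
    outlet[it] = c[-1]
# 'outlet' is the breakthrough curve: near zero until the adsorption front
# reaches the bed exit, then rising toward the inlet concentration. The time
# at which it rises fixes the usable bed capacity, as described above.
```

The retarded front speed is roughly v/(1 + lam*K), so breakthrough here occurs near t ≈ 55 s, well inside the 300 s simulated.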

Veronica J. Rutledge

2013-01-01

46

NASA Technical Reports Server (NTRS)

Fabry-Perot etalons are wavelength-selecting optical devices with widespread applications in lasers, radiometers, and other electro-optical devices. The Etalon Model program generates a stand-alone model of an etalon and is designed to perform calculations over many wavelengths. The model may be used to determine the sensitivity of the filter to temperature changes, thickness tolerances, or alignment angles of an etalon design. Pre- and post-processors are included to facilitate case studies. The Etalon Model program calculates several etalon performance parameters using closed-form equations. It calculates the transmission as a function of wavelength, and uses the transmission data to determine additional etalon performance parameters. The calculations are done for two etalons: the first, a collimated, untilted etalon, is placed in a collimated optical beam at normal incidence; the second, an uncollimated, tilted etalon, is placed in a converging beam at non-normal incidence. The program is intended to be used for comparison of measurements of an etalon in a collimated beam with the performance of the etalon in an alternative system. The program will calculate the parameters for multiple passes through the etalon. The Etalon Model program was developed on an IBM PS/2 Model 80-071 computer using the Microsoft version 4.01 FORTRAN compiler. It has been implemented under DOS 3.21 and has a memory requirement of 167K. The Etalon Model program was developed in 1988.
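The core closed-form relation such a program evaluates is the Airy transmission function. A minimal sketch for an ideal lossless etalon (illustrative parameters, not the program's actual code):

```python
import numpy as np

def etalon_transmission(wavelength, n=1.5, d=100e-6, R=0.9, theta=0.0):
    """Airy transmission of an ideal lossless Fabry-Perot etalon.

    wavelength : vacuum wavelength [m]
    n, d       : refractive index and thickness of the etalon (illustrative)
    R          : surface reflectance
    theta      : internal propagation angle [rad] (0 = collimated, untilted)
    """
    delta = 4.0 * np.pi * n * d * np.cos(theta) / wavelength  # round-trip phase
    F = 4.0 * R / (1.0 - R) ** 2                              # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

# Transmission peaks where the round-trip phase is a multiple of 2*pi; for
# these parameters 2*n*d = 300 um, so 300 nm is the m = 1000 resonance.
# Tilting the etalon (theta > 0) shifts the peaks, which is the basis of the
# tilted-etalon comparison described above.
scan = np.linspace(290e-9, 310e-9, 100001)
T = etalon_transmission(scan)
```

Between resonances the transmission floor is 1/(1 + F), which for R = 0.9 is below one percent.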

Cross, P. L.

1994-01-01

47

NASA Astrophysics Data System (ADS)

Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.

Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

48

NSDL National Science Digital Library

A human is a complicated organism, and it is considered unethical to do many kinds of experiments on human subjects. For these reasons, biologists often use simpler 'model' organisms that are easy to keep and manipulate in the laboratory. Despite obvious differences, model organisms share with humans many key biochemical and physiological functions that have been conserved (maintained) by evolution. Each of the following model organisms has its advantages and disadvantages in different research applications. This tool allows you to examine the similarities between different systems by comparing the proteins they share and the proportion of DNA they have in common. Choose a gene from the drop-down menu and select the species you want to compare. Rolling over the images will give you a more detailed description of each model. Clicking on a gene's name will take you to the National Center for Biotechnology Information, where you can explore the latest relevant scientific literature.

2009-04-14

49

A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

Daniel, David J [Los Alamos National Laboratory; Mc Pherson, Allen [Los Alamos National Laboratory; Thorp, John R [Los Alamos National Laboratory; Barrett, Richard [SNL; Clay, Robert [SNL; De Supinski, Bronis [LLNL; Dube, Evi [LLNL; Heroux, Mike [SNL; Janssen, Curtis [SNL; Langer, Steve [LLNL; Laros, Jim [SNL

2011-01-14

50

Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...

51

NASA Technical Reports Server (NTRS)

A summary report is given of the activities of the device modeling workshop which was held as a part of the Space Photovoltaic Research and Technology Conference at the Lewis Research Center, October 7 to 9, 1986. The purpose of this workshop was to assess the status of solar cell device modeling to see if it is meeting present and future needs of the photovoltaic community.

Schwartz, Richard

1987-01-01

52

Objective: The authors have developed a clinical model of limited cone-beam X-ray CT for dental use and started to use the model in clinical practice. It is called “3DX multi image micro CT” (3DX, J. Morita, Kyoto, Japan). Presented here is a report about the result. Method: We made a design of limited cone-beam X-ray CT so that it could

Yoshinori Arai; Kazuya Honda; Kazuo Iwai; Koji Shinoda

2001-01-01

53

NSDL National Science Digital Library

Created and maintained by Dr. Dave Woodcock of the Chemistry Department at Okanagan University College in British Columbia, Canada, this site features models of over 1,100 molecules in .pdb, or Chemscape Chime, format (link to free download provided). Users may search the molecular database using an internal search engine or browse by category or alphabetically. Index page entries include the molecule's name, formula, molar mass, and comments. The site also features more detailed models of selected molecular fragments.

Woodcock, Dave.

1998-01-01

54

[Garbled slide-outline residue; recoverable topics: free-form deformation (FFD) examples; deformation of articulated structures by attaching FFD patches to a skeleton or by defining ad-hoc laws (e.g., muscles); extended FFD with non-cubic patches and cubic parametrization; animating deformation with factor curves that interpolate a transform over space and time; geometric skin model for human face animation.]

Demetri Terzopoulos; Kurt W. Fleischer

1988-01-01

55

NASA Astrophysics Data System (ADS)

Calculation procedures used in the design of ventilating systems, especially suited for displacement ventilation and for linking it to mixing ventilation, are addressed. The two-zone flow model is considered, and the steady-state and transient solutions are addressed. Different methods of supplying air are discussed, and different types of air flow are considered: piston flow, plane flow, and radial flow. An evaluation model for ventilation systems is presented.

Skaaret, Eimund

56

NASA Astrophysics Data System (ADS)

In order to describe heavy-ion fusion reactions around the Coulomb barrier with an actinide target nucleus, we propose a model which combines the coupled-channels approach and a fluctuation-dissipation model for dynamical calculations. This model takes into account couplings to the collective states of the interacting nuclei in the penetration of the Coulomb barrier and the subsequent dynamical evolution of a nuclear shape from the contact configuration. In the fluctuation-dissipation model with a Langevin equation, the effect of nuclear orientation at the initial impact on the prolately deformed target nucleus is considered. Fusion-fission, quasifission, and deep quasifission are separated as different Langevin trajectories on the potential energy surface. Using this model, we analyze the experimental data for the mass distribution of fission fragments (MDFF) in the reactions of 34,36S + 238U and 30Si + 238U at several incident energies around the Coulomb barrier. We find that the time scale in the quasifission as well as the deformation of fission fragments at the scission point are different between the 30Si + 238U and 36S + 238U systems, causing different mass asymmetries of the quasifission.

Aritomo, Y.; Hagino, K.; Nishio, K.; Chiba, S.

2012-04-01

57

Induced nuclear fission viewed as a diffusion process: Transients

NASA Astrophysics Data System (ADS)

Induced nuclear fission is viewed as a diffusion process of the fission degree of freedom over the fission barrier. We describe this process in terms of a Fokker-Planck equation which contains the fission variable and its canonically conjugate momentum. We solve this equation numerically for several energies (temperatures) of the fissioning nucleus, neglecting changes of the fission barrier due to the temperature dependence of nuclear shell effects. We pay particular attention to the time τ needed for the system to build up the quasistationary probability flow over the fission barrier. The rate of the latter is approximated in terms of the Bohr-Wheeler formula or Kramers' transition-state expression; the precise value of the quasistationary current depends on the nuclear friction constant β. Our results for τ are consistent with those obtained earlier in the framework of a simplified model: As long as β ≤ β₀, the time τ is proportional to β⁻¹. This relationship exhibits the fact that with increasing friction β, the diffusion process is accelerated, so that it takes the system increasingly less time to attain the quasistationary distribution. The constant β₀ is roughly given by 2ϖ₁, where ϖ₁ is the frequency of a harmonic oscillator potential which osculates the potential at the minimum corresponding to the initial configuration of the fissioning nucleus. The condition β ≤ β₀ is roughly equivalent to the motion in that minimum being underdamped. The converse relationship (τ increases with β) is found for β > β₀. We ascribe this to the fact that the fission variable then executes an overdamped motion. Generalizing Kramers' original derivation, we obtain an analytical expression for the time dependence of the probability current over the fission barrier. For β ≲ β₀, this expression agrees well with our numerical results. We use it to calculate the energy dependence of the fission probability Pf and find that Pf grows much less rapidly with increasing excitation energy than would be predicted by the Bohr-Wheeler formula. This is in qualitative agreement with recent experimental findings and suggests that the energy dependence of Pf deserves further investigation and can be used to determine β experimentally. Our analysis does not yet include the additional time delay incurred by the system on its way from the saddle to the scission point: Clearly the time needed to establish the quasistationary situation at the scission point will be larger than τ. This would probably lead to additional modifications of the energy dependence of Pf. NUCLEAR REACTIONS, FISSION Diffusion over a potential barrier; transients; deviation from Bohr-Wheeler formula.
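The friction dependence of the quasistationary current can be illustrated with Kramers' classic spatial-diffusion rate formula — a textbook expression for the moderate-to-strong-friction regime, not the authors' Fokker-Planck solution, and with illustrative parameters:

```python
import math

def kramers_rate(beta, omega0, omegab, Eb_over_T):
    """Kramers' escape rate over a barrier with friction correction.

    beta      : friction constant (illustrative units)
    omega0    : oscillator frequency at the potential minimum
    omegab    : magnitude of the inverted-oscillator frequency at the barrier top
    Eb_over_T : barrier height divided by temperature
    """
    # Friction factor: -> 1 as beta -> 0 (transition-state limit),
    # -> omegab/beta for large beta (overdamped suppression of the current).
    factor = (math.sqrt(omegab**2 + beta**2 / 4.0) - beta / 2.0) / omegab
    return (omega0 / (2.0 * math.pi)) * factor * math.exp(-Eb_over_T)

# The stationary current over the barrier decreases monotonically as the
# friction grows, consistent with the overdamped regime described above.
rates = [kramers_rate(b, omega0=1.0, omegab=1.0, Eb_over_T=5.0)
         for b in (0.0, 0.5, 1.0, 2.0, 5.0)]
```

Note this formula gives only the quasistationary rate; the transient build-up time τ discussed in the abstract requires solving the time-dependent Fokker-Planck equation.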

Grangé, P.; Jun-Qing, Li; Weidenmüller, H. A.

1983-05-01

58

NSDL National Science Digital Library

The EJS Gnomon model simulates the shadow cast by a gnomon (the part of a sundial that casts the shadow) over the course of a day for any day of the year and any latitude on Earth. The program gives you the option to use the mean Sun (which moves relative to the stars at a constant rate throughout the year) or the true Sun (which varies its apparent speed relative to the background stars). The default is to use the true Sun. The program also shows the observer's horizon plane on the spherical Earth, as well as the ecliptic and the apparent path of the Sun. The Earth View can be set to let Earth rotate or remain fixed. The EJS Gnomon model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_astronomy_Gnomon.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for astronomy are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Timberlake, Todd

2009-08-19

59

NSDL National Science Digital Library

The Ejs Beats model displays the result of adding two waves with different frequencies. The simulation displays the superposition of the two waves as well as a phasor diagram that shows how the waves add up at one point in space. The ratio of the wave amplitudes, the ratio of the frequencies, and the phase shift between the two waves can be changed via textboxes. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. The Ejs Beats model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_ehu_oscillations_beats.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for classical mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.
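The superposition the simulation displays reduces to a standard trigonometric identity; a small numerical sketch (hypothetical frequencies, unrelated to the simulation's defaults):

```python
import numpy as np

# Sum of two equal-amplitude waves with nearby frequencies f1 and f2:
#   cos(2*pi*f1*t) + cos(2*pi*f2*t)
#     = 2 * cos(2*pi*(f1 - f2)/2 * t) * cos(2*pi*(f1 + f2)/2 * t),
# i.e. a carrier at the mean frequency inside a slow envelope; the
# intensity beats at |f1 - f2| times per second.
f1, f2 = 440.0, 444.0                        # Hz, illustrative values
t = np.linspace(0.0, 2.0, 20001)
superposition = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
envelope_times_carrier = 2 * np.cos(np.pi * (f1 - f2) * t) \
                           * np.cos(np.pi * (f1 + f2) * t)
beat_frequency = abs(f1 - f2)                # 4 beats per second here
```

The two arrays agree to machine precision, which is exactly what the phasor diagram in the simulation visualizes: two rotating phasors whose resultant length waxes and wanes at the difference frequency.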

Aguirregabiria, Juan

2008-10-13

60

Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis.

Curtis, S.B.

1990-09-01


62

NSDL National Science Digital Library

The Micrometer Model shows the principle of operation and the physical parts of a real micrometer. Micrometers use a screw to amplify distances that are too small to measure directly into large rotations of the screw that are big enough to read from a scale. The accuracy of a micrometer derives from the accuracy of the thread that is at its heart. The basic operating principle of a micrometer is that the rotation of an accurately made screw can be directly and precisely correlated to a certain amount of axial movement (and vice-versa), through the constant known as the screw's lead. The Micrometer model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_ntnu_Micrometer.jar file to run the program (Java must be installed).
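The screw-lead relation at the heart of the micrometer is a simple proportionality; a tiny sketch (assuming the common 0.5 mm metric lead, which this simulation's description does not actually specify):

```python
def axial_travel(turns, lead_mm=0.5):
    """Axial spindle movement for a given number of thimble turns.

    A metric micrometer typically uses a 0.5 mm-lead screw, so one full
    turn advances the spindle 0.5 mm; a thimble divided into 50 marks
    then reads 0.01 mm per mark. (Lead value assumed for illustration.)
    """
    return turns * lead_mm

# Half a turn of a 0.5 mm-lead screw moves the spindle 0.25 mm;
# one thimble mark (1/50 turn) corresponds to 0.01 mm.
```

This is the "rotation precisely correlated to axial movement through the screw's lead" relation described above.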

Hwang, Fu-Kwun

2009-09-11

63

Background: This work focuses on the computational modelling of osteomyelitis, a bone pathology caused by bacterial infection (mostly Staphylococcus aureus). The infection alters the RANK/RANKL/OPG signalling dynamics that regulates osteoblast and osteoclast behaviour in bone remodelling, i.e. the resorption and mineralization activity. The infection rapidly leads to severe bone loss and necrosis of the affected portion, and it may even spread to other parts of the body. Osteoporosis, on the other hand, is not a bacterial infection but is similarly a defective bone pathology, arising from imbalances in the RANK/RANKL/OPG molecular pathway and the progressive weakening of the bone structure. Results: Since both osteoporosis and osteomyelitis cause loss of bone mass, we focused on comparing the dynamics of these diseases by means of computational models. Firstly, we performed a meta-analysis on gene expression data of normal, osteoporotic, and osteomyelitic bone conditions. We mainly focused on RANKL/OPG signalling, the TNF and TNF receptor superfamilies, and the NF-kB pathway. Using information from the gene expression data, we estimated parameters for novel models of osteoporosis and of osteomyelitis. Our models can be seen as a hybrid ODE and probabilistic verification modelling framework that aims to investigate the dynamics of the effects of the infection on bone remodelling. Finally, we discuss different diagnostic estimators defined by formal verification techniques, in order to assess different bone pathologies (osteopenia, osteoporosis, and osteomyelitis) in an effective way. Conclusions: We present a modeling framework able to reproduce aspects of the different defective bone remodelling dynamics of osteomyelitis and osteoporosis. We report that the verification-based estimators are meaningful in light of a feed-forward between computational medicine and clinical bioinformatics.
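As a toy illustration of the kind of ODE dynamics such a framework builds on (our own minimal construction, not the authors' estimated model): osteoblasts drive RANKL-mediated osteoclast recruitment, OPG damps it, and bone mass tracks the balance between resorption and mineralization.

```python
def simulate_bone(rankl=1.0, opg=0.5, t_end=50.0, dt=0.01):
    """Euler integration of a toy osteoclast/osteoblast/bone-mass system.

    All coefficients are illustrative, not fitted values.
    """
    C, B, z = 0.1, 0.1, 100.0            # osteoclasts, osteoblasts, bone mass
    stim = rankl / (1.0 + opg)           # effective RANKL signal, damped by OPG
    for _ in range(int(t_end / dt)):
        dC = stim * B - 0.2 * C          # RANKL-driven recruitment vs apoptosis
        dB = 0.1 - 0.05 * B              # constant supply vs death
        dz = -0.02 * C + 0.01 * B        # resorption vs mineralization
        C, B, z = C + dt * dC, B + dt * dB, z + dt * dz
    return C, B, z

# Raising RANKL, as infection does, tilts the balance toward resorption and
# leaves less bone than the baseline scenario.
baseline = simulate_bone(rankl=1.0)
infected = simulate_bone(rankl=2.0)
```

The paper's framework goes well beyond this: parameters are estimated from gene expression data and the ODEs are coupled to probabilistic verification, but the resorption/mineralization feedback is the common core.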

2012-01-01

64

NASA Astrophysics Data System (ADS)

Summer upwelling on the continental shelf north of Cape Canaveral, Florida, has been previously observed to result from wind forcing. A two-layer, finite element model reproduces reasonably well the characteristics of the wind-driven upwelling with respect to location and magnitude. Model investigation also shows that upwelling results from offshore current forcing which is imposed through an along-shelf sea level slope. This sea level slope, which has been found to be of the order of −10⁻⁷, represents a mean Gulf Stream effect. The results suggest that the strongest upwelling events near Cape Canaveral occur when the wind and Gulf Stream forcings act together.

Lorenzzetti, João; Wang, John D.; Lee, Thomas N.

65

National Technical Information Service (NTIS)

A summary report is given of the activities of the device modeling workshop which was held as a part of the Space Photovoltaic Research and Technology Conference at the Lewis Research Center, October 7 to 9, 1986. The purpose of this workshop was to acces...

R. Schwartz

1987-01-01

66

ERIC Educational Resources Information Center

There are dozens of books and hundreds of resources that address the issue of character development in students: how to raise them to be good people, how to teach them to be good citizens, how to help them to make good decisions. Little is written, however, about the character development of principals and school leaders, whose behavior is a model…

Holloway, John

2006-01-01

67

The evolution of Bayesian approaches for model uncertainty over the past decade has been remarkable. Catalyzed by advances in methods and technology for posterior computation, the scope of these methods has widened substantially. Major thrusts of these developments have included new methods for semiautomatic prior specification and posterior exploration. To illustrate key aspects of this evolution, the highlights of some

Merlise Clyde; Edward I. George

2004-01-01

68

Although air quality models have been applied historically to address issues specific to ambient air quality standards (i.e., one criteria pollutant at a time) or welfare (e.g., acid deposition or visibility impairment), they are inherently multipollutant based. Therefore, in pri...

69

ERIC Educational Resources Information Center

As teachers learn new pedagogical strategies, they crave explicit demonstrations that show them how the new strategies will work with their students in their classrooms. Successful instructional coaches, therefore, understand the importance of modeling lessons to help teachers develop a vision of effective instruction. The author, an experienced…

Casey, Katherine

2011-01-01

70

NSDL National Science Digital Library

The Daisyworld model created by Andrew Watson and James Lovelock (1983, Tellus, v. 35B, p. 284-289) is a wonderful example of a self-regulating system incorporating positive and negative feedbacks. The model consists of a planet on which black and white daisies are growing. The growth of these daisies is governed by a parabolic shaped growth function regulated by planetary temperature and is set to zero for temperatures less than 5 °C or greater than 40 °C and optimized at 22.5 °C. The model explores the effect of a steadily increasing solar luminosity on the growth of daisies and the resulting planetary temperature. The growth function for the daisies allows them to modulate the planet's temperature for many years, warming it early on as black daisies grow, and cooling it later as white daisies grow. Eventually, the solar luminosity increases beyond the daisies' capability to modulate the temperature and they die out, leading to a rapid rise in the planetary temperature. Students read Watson and Lovelock's original paper, and then use STELLA to create their own Daisyworld model with which they can experiment. Experiments include changing the albedos of the daisies, changing their death rates, and changing the rate at which energy is conducted from one part of the planet to another. In all cases, students keep track of daisy populations and of planetary temperature over time.
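Watson and Lovelock's equations are compact enough to sketch directly. The following bare-bones steady-state version uses what we believe are the paper's standard parameter values (solar flux 917 W/m², heat-transfer constant q = 2.06e9 K⁴, death rate 0.3, albedos 0.25/0.5/0.75); the STELLA exercise described above is richer.

```python
SIGMA = 5.67e-8                  # Stefan-Boltzmann constant [W m^-2 K^-4]
S0 = 917.0                       # Daisyworld solar flux [W m^-2]
Q = 2.06e9                       # local heat-transfer parameter [K^4]
GAMMA = 0.3                      # daisy death rate
A_BARE, A_BLACK, A_WHITE = 0.5, 0.25, 0.75   # albedos

def growth(T):
    """Parabolic growth rate: zero near 5 C and 40 C, peak near 22.5 C."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)

def daisyworld(L, dt=0.01, steps=20000):
    """Integrate daisy areas to steady state for solar luminosity factor L."""
    ab = aw = 0.01                               # seed areas of black/white daisies
    for _ in range(steps):
        x = 1.0 - ab - aw                        # bare ground fraction
        A = x * A_BARE + ab * A_BLACK + aw * A_WHITE   # planetary albedo
        Te4 = S0 * L * (1.0 - A) / SIGMA         # planetary temperature^4
        Tb = (Q * (A - A_BLACK) + Te4) ** 0.25   # local temp over black daisies
        Tw = (Q * (A - A_WHITE) + Te4) ** 0.25   # local temp over white daisies
        ab += dt * ab * (x * growth(Tb) - GAMMA)
        aw += dt * aw * (x * growth(Tw) - GAMMA)
        ab, aw = max(ab, 0.001), max(aw, 0.001)  # retain a small seed stock
    return ab, aw, Te4 ** 0.25

# At L = 1 both daisy types persist and the planetary temperature settles
# near the daisies' optimum, illustrating the self-regulation feedback.
ab, aw, T = daisyworld(L=1.0)
```

Sweeping L upward reproduces the regulation-then-collapse behaviour the students explore: the daisies hold T nearly constant until the luminosity exceeds their range.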

Menking, Kirsten

71

NSDL National Science Digital Library

No glue is needed for learners of any age to become marshmallow architects or engineers. Using marshmallows and water (and maybe edible decorations like peanut butter, pretzels, gumdrops, etc.), learners wet a few marshmallows at a time and stick them together bit by bit to construct whatever models they want.

Science, Lawrence H.

2010-01-01

72

The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).

A. Alsaed

2004-09-14

73

ATMOSPHERIC MODELING: MODEL AND ACCURACY

The development of models to assess the emission control requirements of primary precursor pollutants in the production of photochemical oxidants has been underway for approximately 20 years. Over the period there has been a considerable increase in our understanding of the basic...

74

NSDL National Science Digital Library

As stated in "About This Book," the author isn't going to take the usual approach to the subject of chemistry. Because virtually all explanations of chemical reactions are based on our current model of atoms and molecules, the first thing to do here is to help you understand why we believe that atoms and molecules look and act the way they do. That's not a trivial issue, because despite the impression you might have gotten from textbooks, no one has ever seen an atom in the sense that you can see this page in front of you. What we have are observations and experiments that lead us to formulate models of atoms. This free selection includes the Table of Contents, Preface, About This Book section, a Safety Note, and the Glossary.

Robertson, William C.

2007-01-01

75

NASA Technical Reports Server (NTRS)

The molecule modeling method known as Multibody Order (N) Dynamics, or MBO(N)D, was developed by Moldyn, Inc. at Goddard Space Flight Center through funding provided by the SBIR program. The software can model the dynamics of molecules through technology that simulates low-frequency molecular motions and properties, such as movements among a molecule's constituent parts. With MBO(N)D, a molecule is substructured into a set of interconnected rigid and flexible bodies. These bodies replace individual atoms as the unit of computation, reducing the burden of mapping each atom separately. Moldyn's technology cuts computation time while increasing accuracy. The MBO(N)D technology is available as Insight II 97.0 from Molecular Simulations, Inc. Currently the technology is used to account for forces on spacecraft parts and to perform molecular analyses for pharmaceutical purposes. It permits the solution of molecular dynamics problems on a moderate workstation, as opposed to on a supercomputer.

2000-01-01

76

NASA Technical Reports Server (NTRS)

Dr. Donald Gilles, the Discipline Scientist for Materials Science in NASA's Microgravity Materials Science and Applications Department, demonstrates to Carl Dohrman a model of dendrites, the branch-like structures found in many metals and alloys. Dohrman was recently selected by the American Society for Metals International as their 1999 ASM International Foundation National Merit Scholar. The University of Illinois at Urbana-Champaign freshman recently toured NASA's materials science facilities at the Marshall Space Flight Center.

1999-01-01

77

NSDL National Science Digital Library

The Gyroscope example computes and displays the dynamics of a gyroscope under the influence of a gravitational torque acting on the center of mass. The gyroscope is supported at one end and given an initial angular velocity component about its axis of symmetry and a component perpendicular to its axis of symmetry. The numerical solution shows the motion for all initial conditions including zero initial angular momentum. The model is designed to show the cycloidal motion (precession and nutation) of the gyroscope axle when the initial angular velocity is large. Users can vary the position and radius of the spinning mass as well as the initial angle and can display the angular momentum, angular velocity, and torque vectors. A second window shows the elevation angle of the axle and the angular momentum vector. Units are chosen such that the total mass M and the acceleration of gravity g are one. The rotor is an ellipsoid with a uniform mass distribution and with major axes 2*R and minor axis R/5. The ellipsoid's moment of inertia through the center of mass is 4MR²/5 about the major axes and 26MR²/125 about the minor axis. The Gyroscope model is a supplemental simulation for the article "It Has to Go Down a Little, in Order to Go Around" by Svilen Kostov and Daniel Hammer in The Physics Teacher 49(4), 216-219 (2011) and has been approved by the authors and The Physics Teacher editor. The model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_csm_ch17_Gyroscope.jar file will run the program if Java is installed.
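For reference, the principal moments of inertia of a uniform solid ellipsoid follow a standard closed form in the semi-axis convention (I_x = M(b² + c²)/5 for semi-axes a, b, c). A minimal sketch, with a sphere as a sanity check; the function name and the choice of semi-axes are illustrative assumptions, not taken from the simulation itself:

```python
def ellipsoid_inertia(m, a, b, c):
    """Principal moments of inertia of a uniform solid ellipsoid with
    semi-axes a, b, c (standard result: I_x = m*(b^2 + c^2)/5, etc.)."""
    return (m * (b ** 2 + c ** 2) / 5.0,
            m * (a ** 2 + c ** 2) / 5.0,
            m * (a ** 2 + b ** 2) / 5.0)

# Sanity check: a sphere of radius R has I = 2MR^2/5 about every axis.
sphere = ellipsoid_inertia(1.0, 1.0, 1.0, 1.0)

# Semi-axes (R, R, R/5), illustrative: the transverse moment works out
# to M*(R^2 + R^2/25)/5 = 26*M*R^2/125.
rotor = ellipsoid_inertia(1.0, 1.0, 1.0, 0.2)
```
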

Christian, Wolfgang

2011-02-10

78

NSDL National Science Digital Library

Created by Kyle Siegrist of the University of Alabama-Huntsville, this is an online, interactive lesson on geometric models. The author provides examples, exercises, and applets which include Buffon's problems, Bertrand's paradox, and random triangles. Additionally, the author provides links to external resources for students wanting to engage further in this topic. This is simply one lesson in a series of seventeen. They are all easily accessible, as the author has formatted his site much like an online textbook.
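Buffon's needle, mentioned above, is the classic geometric-probability example: a needle of length l ≤ d dropped on lines spaced d apart crosses a line with probability 2l/(πd), so the crossing frequency yields an estimate of π. A minimal Monte Carlo sketch (an illustration, not code from the lesson; all names are assumed):

```python
import math
import random

def buffon_pi(n_drops, needle_len=1.0, spacing=1.0, seed=0):
    """Estimate pi by Buffon's needle (requires needle_len <= spacing).
    P(cross) = 2 * needle_len / (pi * spacing), so
    pi ~= 2 * needle_len * n_drops / (spacing * hits)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        y = rng.uniform(0.0, spacing / 2)       # needle center to nearest line
        theta = rng.uniform(0.0, math.pi / 2)   # acute angle with the lines
        if y <= (needle_len / 2) * math.sin(theta):
            hits += 1
    return 2 * needle_len * n_drops / (spacing * hits)
```

With a few hundred thousand drops the estimate lands within a few hundredths of π, which makes the slow 1/√N Monte Carlo convergence tangible for students.
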

Siegrist, Kyle

2009-02-23

79

NSDL National Science Digital Library

This highly visual model demonstrates the atomic theory of matter which states that a gas is made up of tiny particles of atoms that are in constant motion, smashing into each other. Balls, representing molecules, move within a cage container to simulate this phenomenon. A hair dryer provides the heat to simulate the heating and cooling of gas: the faster the balls are moving, the hotter the gas. Learners observe how the balls move at a slower rate at lower "temperatures."

Exploratorium, The

2013-01-30

80

NASA Astrophysics Data System (ADS)

The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). For these reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.

Fossión, Rubén

2010-09-01

81

NSDL National Science Digital Library

This site uses linear models to demonstrate the change in bird populations on a barren island over time, supply and demand, and the natural cleaning of a polluted lake by fresh water over time. The problems are laid out and turned into both graphic and equation form in order to understand the rate of change happening in each scenario. There are also links to previously covered materials that can help student review material from past math lessons.

Wattenberg, Frank

1997-01-01

82

NASA Technical Reports Server (NTRS)

Automatic formal verification methods for finite-state systems, also known as model-checking, successfully reduce labor costs since they are mostly automatic. Model checkers explicitly or implicitly enumerate the reachable state space of a system, whose behavior is described implicitly, perhaps by a program or a collection of finite automata. Simple properties, such as mutual exclusion or absence of deadlock, can be checked by inspecting individual states. More complex properties, such as lack of starvation, require search for cycles in the state graph with particular properties. Specifications to be checked may consist of built-in properties, such as deadlock or 'unspecified receptions' of messages, another program or implicit description, to be compared with a simulation, bisimulation, or language inclusion relation, or an assertion in one of several temporal logics. Finite-state verification tools are beginning to have a significant impact in commercial designs. There are many success stories of verification tools finding bugs in protocols or hardware controllers. In some cases, these tools have been incorporated into design methodology. Research in finite-state verification has been advancing rapidly, and is showing no signs of slowing down. Recent results include probabilistic algorithms for verification, exploitation of symmetry and independent events, and the use symbolic representations for Boolean functions and systems of linear inequalities. One of the most exciting areas for further research is the combination of model-checking with theorem-proving methods.
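The explicit state-space enumeration described above can be sketched in a few lines: a toy checker (illustrative only, not any particular tool's API) that enumerates every state reachable from the initial state and flags deadlocks, i.e. reachable states with no outgoing transitions:

```python
from collections import deque

def find_deadlocks(initial, successors):
    """Explicit-state reachability analysis: breadth-first enumeration of all
    states reachable from `initial`, returning those with no outgoing
    transitions (deadlocks). `successors(state)` yields the next states."""
    seen = {initial}
    frontier = deque([initial])
    deadlocks = []
    while frontier:
        state = frontier.popleft()
        succs = successors(state)
        if not succs:
            deadlocks.append(state)   # no enabled transition: deadlock
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks
```

More complex properties mentioned in the abstract, such as lack of starvation, extend the same traversal with cycle detection over the state graph rather than per-state inspection.
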

Dill, David L.

1995-01-01

83

Students' Models of Curve Fitting: A Models and Modeling Perspective

ERIC Educational Resources Information Center

The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

Gupta, Shweta

2010-01-01

84

10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

85

New fission fragment distributions and r-process origin of the rare-earth elements.

Neutron star (NS) merger ejecta offer a viable site for the production of heavy r-process elements with nuclear mass numbers A ≳ 140. The crucial role of fission recycling is responsible for the robustness of this site against many astrophysical uncertainties, but calculations sensitively depend on nuclear physics. In particular, the fission fragment yields determine the creation of 110 ≲ A ≲ 170 nuclei. Here, we apply a new scission-point model, called SPY, to derive the fission fragment distribution (FFD) of all relevant neutron-rich, fissioning nuclei. The model predicts a doubly asymmetric FFD in the abundant A ≈ 278 mass region that is responsible for the final recycling of the fissioning material. Using ejecta conditions based on relativistic NS merger calculations, we show that this specific FFD leads to a production of the A ≈ 165 rare-earth peak that is nicely compatible with the abundance patterns in the Sun and metal-poor stars. This new finding further strengthens the case of NS mergers as possible dominant origin of r nuclei with A ≳ 140. PMID:24483647

Goriely, S; Sida, J-L; Lemaître, J-F; Panebianco, S; Dubray, N; Hilaire, S; Bauswein, A; Janka, H-T

2013-12-13

86

NSDL National Science Digital Library

This interactive activity allows users the ability to explore different representations for fractions and how they are equivalent to mixed numbers, decimals, and percentages. Users adjust the numerator (up to 100) and the denominator (1 to 25) in order to see a visual representation of the fraction. The visual representation can be seen as a length, area, region, or set model. Users also have the ability to keep track of the equivalent forms of fractions in a table. Instructions and exploration questions are given.
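The conversions the applet visualizes can be sketched programmatically; the function name and return layout here are illustrative assumptions, not the applet's actual interface:

```python
from fractions import Fraction

def fraction_views(numerator, denominator):
    """Return the mixed-number, decimal, and percentage forms of a fraction,
    mirroring the equivalent representations the activity tracks."""
    f = Fraction(numerator, denominator)          # reduces automatically
    whole, remainder = divmod(f.numerator, f.denominator)
    mixed = (whole, Fraction(remainder, f.denominator))
    decimal = numerator / denominator
    percent = 100.0 * decimal
    return mixed, decimal, percent
```

For example, 7/4 comes back as the mixed number 1 3/4, the decimal 1.75, and 175%.
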

2011-01-01

87

Predictive Statistical Models for User Modeling

The limitations of traditional knowledge representation methods for modeling complex human behaviour led to the investigation of statistical models. Predictive statistical models enable the anticipation of certain aspects of human behaviour, such as goals, actions and preferences. In this paper, we motivate the development of these models in the context of the user modeling enterprise. We then review the two

Ingrid Zukerman; David W. Albrecht

2001-01-01

88

NASA Technical Reports Server (NTRS)

A major problem in the qualification of integrated circuit cells and in the development of adequate tests for the circuits is the lack of information on the nature and density of fault models. Some of this information is being obtained from the test structures. In particular, the Pinhole Array Capacitor is providing values for the resistance of gate oxide shorts, and the Addressable Inverter Matrix is providing values for parameter distributions such as noise margins. Another CMOS fault mode, that of the open-gated transistor, is examined and the state of the transistors assessed. Preliminary results are described for a number of open-gated structures such as transistors, inverters, and NAND gates. Resistor faults are applied to various CMOS gates and the time responses are noted. The critical value for the resistive short to upset the gate response was determined.

Sayah, H. R.; Buehler, M. G.

1985-01-01

89

Nuclear-charge distribution for A = 121 from thermal-neutron-induced fission of ²³⁵U

The fractional cumulative yield of ¹²¹Ag and the fractional independent yields of ¹²¹Cd, ¹²¹In, and ¹²¹Sn from thermal-neutron-induced fission of ²³⁵U were determined radiochemically to be 0.12 ± 0.05, 0.61 ± 0.09, 0.24 ± 0.08, and 0.03 ± 0.04, respectively. The yield values were used to determine the nuclear-charge-distribution parameters σ_Z = 0.55 ± 0.10 and ΔZ = 0.50 ± 0.05 for A = 121. The σ_Z for A = 121 is close to σ_Z = 0.52 ± 0.02 for high-yield fission products, and no evidence for an even-odd Z effect was found for A = 121. The positive ΔZ value, which corresponds to Z_P = 48.15, is similar to those for several higher mass numbers reported previously, and it is considerably greater than the negative values predicted by the scission-point theoretical model. The use of a separation distance between nascent fragments greater than 1.4 fm, the value used in the theoretical calculations, could reduce the discrepancy and could also account for the observed enhanced independent yields of tin fission products with Z_P near 50 (A = 126-129).
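The parameters σ_Z and Z_P quoted above refer to the standard Gaussian charge-dispersion picture, in which the fractional independent yield of charge Z at fixed A is the Gaussian of width σ_Z centered on Z_P integrated over the unit bin [Z - 1/2, Z + 1/2]. A short sketch (an illustration of that picture, not the authors' code) roughly reproduces the measured yields:

```python
import math

def fractional_yield(z, z_p, sigma):
    """Fractional independent yield of charge z for fixed mass A, assuming a
    Gaussian charge dispersion of width sigma centered on the most probable
    charge z_p, integrated over the unit bin [z - 0.5, z + 0.5]."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((z - z_p + 0.5) / s)
                  - math.erf((z - z_p - 0.5) / s))

# Parameters reported for A = 121: Z_P = 48.15, sigma_Z = 0.55.
yields = {z: fractional_yield(z, 48.15, 0.55) for z in (47, 48, 49, 50)}
```

The computed values come out near 0.12 (Ag), 0.62 (Cd), 0.26 (In), and 0.01 (Sn), consistent with the measured yields within their quoted uncertainties.
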

Robinson, L.; Wahl, A.C.; Semkow, T.M.; Norris, A.E.

1985-04-01

90

Charge distributions for the photofission of 235U and 238U with 12-30 MeV bremsstrahlung

NASA Astrophysics Data System (ADS)

A systematic study of the charge distribution for bremsstrahlung-induced photofission of 235U and 238U with end-point energies ranging from 12 to 30 MeV was performed using direct γ-ray spectrometry of irradiated uranium samples or of fission product catcher foils, and also employing chemical separation techniques. For both fissioning systems the width of the charge distribution was found to be practically independent of the average excitation energy, and the values obtained are in very good agreement with those reported in the literature for low-energy fission. The deviation of the most probable charge Zp from the unchanged-charge-density value ZUCD as a function of the fragment mass shows the influence of the 50-proton shell in the charge distributions and a higher charge-to-mass ratio of the light fragments independent of the compound nucleus excitation energy. For the necessary conversion of postneutron into preneutron masses, neutron emission curves, ν(m*), were deduced from previously measured postneutron and provisional mass distributions. Calculations following the scission-point model of Wilkins et al. and the predictions of the empirical relation of Nethaway reproduce very well the experimentally determined Zp behavior, except in the mass region affected by the Z = 50 closed shell. NUCLEAR REACTIONS, FISSION 235,238U(γ, F), Eγ = 12, 15, 20, 30 MeV; measured product γ-ray spectra; deduced charge distributions, width, and most probable charges; calculated ν(m*) from measured provisional and postneutron mass distributions.

de Frenne, D.; Thierens, H.; Proot, B.; Jacobs, E.; de Gelder, P.; de Clercq, A.; Westmeier, W.

1982-10-01

91

Forward model nonlinearity versus inverse model nonlinearity

The issue of concern is the impact of forward model nonlinearity on the nonlinearity of the inverse model. The question posed is, "Does increased nonlinearity in the head solution (forward model) always result in increased nonlinearity in the inverse solution (estimation of hydraulic conductivity)?" It is shown that the two nonlinearities are separate, and it is not universally true that increased forward model nonlinearity increases inverse model nonlinearity. © 2007 National Ground Water Association.

Mehl, S.

2007-01-01

92

Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch model are discussed and the Rasch model is reformulated as a quasi-independence model. The

Hendrikus Kelderman

1984-01-01

93

A hierarchical program behavior model in a multitasking environment was proposed and applied to a cache multitasking model for performance evaluation. The hierarchical program behavior model consists of the task switching model, execution interval model, and the line (block) reference behavior model for each individual task. An execution interval is a continuous execution of a task between task switches. As

Makoto Kobayashi

1992-01-01

94

Magnetosphere Models les Modeles de Magnetosphere.

National Technical Information Service (NTIS)

The most recent magnetospheric models are reviewed. After a short overview of the particle environment, a synthetic survey of the problem is given. For each feature of magnetospheric modelling (boundary, current sheet, ring-current) the approaches used by...

J. C. Kosik

1977-01-01

95

Model selection for logistic regression models

NASA Astrophysics Data System (ADS)

Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions will be answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
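The first question, which regressors to include, is commonly answered with an information criterion. The following toy sketch (illustrative, not from the paper) compares an intercept-only logistic model against one that adds a single binary regressor; with one binary regressor the maximum-likelihood fit matches each group's observed rate exactly, so no iterative fitting is needed:

```python
import math

def binom_loglik(successes, trials, p):
    """Bernoulli log-likelihood kernel (binomial coefficient omitted,
    since it cancels when comparing models on the same data)."""
    if p in (0.0, 1.0):
        return 0.0 if successes in (0, trials) else float("-inf")
    return successes * math.log(p) + (trials - successes) * math.log(1.0 - p)

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2 ln L (smaller is better)."""
    return 2 * n_params - 2 * loglik

# Toy data: a binary regressor splitting 80 observations into two groups.
k0, n0 = 10, 40   # group x = 0: 10 successes out of 40
k1, n1 = 30, 40   # group x = 1: 30 successes out of 40

# Null model (intercept only): one fitted probability for everyone.
p_null = (k0 + k1) / (n0 + n1)
ll_null = binom_loglik(k0 + k1, n0 + n1, p_null)

# Full model (intercept + binary x): MLE fits each group's rate exactly.
ll_full = binom_loglik(k0, n0, k0 / n0) + binom_loglik(k1, n1, k1 / n1)

aic_null = aic(ll_null, 1)   # one parameter: intercept
aic_full = aic(ll_full, 2)   # two parameters: intercept + slope
```

Here the regressor clearly matters, so the two-parameter model attains the lower AIC despite its extra parameter.
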

Duller, Christine

2012-09-01

96

Power Plant Performance Modeling: Dynamic Model Evaluation.

National Technical Information Service (NTIS)

The dynamic performance of the turbine and feedwater train of a 550-MW oil-fired plant has been modeled by two modeling systems, the Modular Modeling System (MMS) and the Reactor Transient Analysis System (RETRAN). This report documents the performance of...

P. N. DiDomenico; S. W. W. Shor

1981-01-01

97

NASA Technical Reports Server (NTRS)

Cycle life regression model, cycle life prediction model, and acceleration factors are discussed. A method was presented to: (1) select a mathematical model; (2) determine model coefficients using accelerated test data; (3) test model fit of the accelerated test data; and (4) predict normal packs.
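Steps (1) through (4) can be illustrated with a deliberately simple sketch: assume a power-law cycle-life model L = a·S^b (an assumption chosen for illustration, not the report's actual model), determine the coefficients from accelerated-test data by least squares in log-log space, and extrapolate to normal conditions:

```python
import math

def fit_power_law(stresses, lives):
    """Fit L = a * S**b by ordinary least squares on (ln S, ln L),
    using the closed-form simple linear regression solution."""
    xs = [math.log(s) for s in stresses]
    ys = [math.log(l) for l in lives]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def predict_life(a, b, stress):
    """Step (4): predict cycle life at a (lower) normal stress level."""
    return a * stress ** b
```

Checking model fit, step (3), would amount to inspecting the residuals of the log-log regression before trusting the extrapolation.
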

Schwartz, D.

1978-01-01

98

Modelling long-term dependencies in time series has proved very difficult to achieve with traditional machine-learning methods. This problem occurs when considering music data. In this paper, we introduce predictive models for melodies. We decompose melodic modelling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for

Jean-françois Paiement; Yves Grandvalet; Samy Bengio

2009-01-01

99

ERIC Educational Resources Information Center

Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

Frees, Edward W.; Kim, Jee-Seon

2006-01-01

100

NSDL National Science Digital Library

Models of the heart have been developed since 1960, starting with the discovery and modeling of potassium channels. The first models of calcium balance were made in the 1980s and have now reached a high degree of physiological detail. During the 1990s, these cell models were incorporated into anatomically detailed tissue and organ models.

Denis Noble (Oxford University Department of Physiology)

2004-08-01

101

NSDL National Science Digital Library

A discussion of different modeling techniques in computer graphics including the polygon mesh, parametric cubic curves and patches, implicit functions such as metaballs, procedural modeling (plants and flowers), and modeling transformations.

2003-02-15

102

Introduction to Structural Models.

National Technical Information Service (NTIS)

This paper introduces structural models for flight simulators to technical managers. It gives the rationale for using structural models, defines structural models, and discusses practices associated with their use. The paper provides an example of a struc...

J. Batman; L. Howard; B. Schelker

1992-01-01

103

NSDL National Science Digital Library

This study investigates the mental models that people construct about magnetic phenomena. The project involved students, physics teachers, engineers, and practitioners. The researchers propose five models following a progression from simple description to a field model. Contains 28 references.

Borges, A. T.; Gilbert, John; Tecnico, Colegio

2006-05-23

104

Educating with Aircraft Models

ERIC Educational Resources Information Center

Described is utilization of aircraft models, model aircraft clubs, and model aircraft magazines to promote student interest in aerospace education. The addresses for clubs and magazines are included. (SL)

Steele, Hobie

1976-01-01

105

... simulations with the help of customized programs called computational models. Different models address different questions. The ones ... 2004, a network of researchers has been building computational models of infectious disease outbreaks. The network is ...

106

Editor's Roundtable: Model behavior

NSDL National Science Digital Library

Models are manageable representations of objects, concepts, and phenomena, and are everywhere in science. Models are "thinking tools" for scientists and have always played a key role in the development of scientific knowledge. Models of the solar system,

Liftig, Inez

2010-11-01

107

NASA Astrophysics Data System (ADS)

A layered hierarchical memory model is constructed. The model can store families of correlated images ordered in the form of a hierarchical tree. The structure of the model is similar to that of the visual cortex of the brain.

Dotsenko, Viktor S.

1986-12-01

108

Statistical Software: Composite Linear Models (written by Stuart G. Baker). The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It

109

Geologic Framework Model Analysis Model Report

The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM).
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the repository design. These downstream models include the hydrologic flow models and the radionuclide transport models. All the models and the repository design, in turn, will be incorporated into the Total System Performance Assessment (TSPA) of the potential radioactive waste repository block and vicinity to determine the suitability of Yucca Mountain as a host for the repository. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 2.

R. Clayton

2000-12-19

110

NSDL National Science Digital Library

Hosted by the European Bioinformatics Institute, the BioModels Database is a collaborative, "new effort to develop a data resource that will allow biologist to store, search and retrieve published mathematical models of biological interests. The models in the BioModels Database are annotated and linked to relevant data resources, such as publications, databases of compounds and pathways, controlled vocabularies, etc." The website allows visitors to browse and search the Database for models. The site also provides information about submitting models for the Database. It should be noted that submitted models must undergo tests conducted by BioModels Database curators before they are incorporated. [NL]

111

The class of generalized linear models is extended to develop a class of nonparametric regression models known as generalized smooth models. The technique of local scoring is used to estimate a generalized smooth model and the estimation procedure based on locally weighted regression is shown to produce local likelihood estimates. The asymptotically correct distribution of the deviance difference is derived and its use in comparing the fits of generalized linear models and generalized smooth models is illustrated. The relationship between generalized smooth models and generalized additive models is discussed, also.
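The locally weighted regression underlying local scoring can be sketched as a generic kernel-weighted local linear smoother (a tricube kernel is assumed here for illustration; this is not the paper's estimator):

```python
import math

def local_linear(x0, xs, ys, bandwidth):
    """Locally weighted linear regression evaluated at x0: fit a weighted
    straight line using tricube weights that decay to zero beyond
    `bandwidth`, then return its value at x0."""
    ws = []
    for x in xs:
        u = abs(x - x0) / bandwidth
        ws.append((1 - u ** 3) ** 3 if u < 1 else 0.0)
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    slope = sxy / sxx if sxx > 0 else 0.0
    return my + slope * (x0 - mx)
```

Local scoring would apply such a smoother inside each iteratively reweighted fitting step rather than directly to the raw responses.
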

Glosup, J.

1992-07-23

112

Modeling of muscle fatigue using Hill's model.

A new model incorporating muscle fatigue has been developed to predict the effect of muscle fatigue on the force-time relationship of skeletal muscle by using the PAK-program. Differential equations in the incremental form have been implemented into Hill's muscle model. In order to describe the effect of muscle fatigue and recovery on skeletal muscle behaviors, a set of equations in terms of three phenomenological parameters, which are a fatigue curve under sustained maximal activation, a recovery curve, and an endurance function, was developed. With reference to existing models and experimental results, the input parameters for the fatigue curve under sustained maximal activation and the endurance function were determined. The model has been investigated under an isometric condition. The effects of different shapes of the recovery curves have also been considered in this model. Validation of the model has been performed by comparing the predicted results with the experimental data from an existing literature. PMID:16179754
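The paper's specific equations are not reproduced in the abstract. As a hedged illustration of the general fatigue/recovery idea, a generic first-order law for a normalized force capacity F can be integrated with forward Euler; the law itself, the rate constants, and the activation profile below are assumptions for demonstration, not the authors' model or parameters:

```python
def simulate_capacity(t_end, dt, k_fatigue, k_recover, activation):
    """Forward-Euler integration of a first-order fatigue/recovery law for
    the remaining force capacity F (normalized to 1.0 at rest):
        dF/dt = -k_fatigue * u(t) * F + k_recover * (1 - u(t)) * (1 - F)
    where u(t) in [0, 1] is the activation level."""
    f = 1.0
    history = [f]
    steps = round(t_end / dt)
    for i in range(steps):
        u = activation(i * dt)
        f += dt * (-k_fatigue * u * f + k_recover * (1.0 - u) * (1.0 - f))
        history.append(f)
    return history

# Sustained maximal activation for 30 s, then 30 s of rest.
hist = simulate_capacity(60.0, 0.01, k_fatigue=0.1, k_recover=0.05,
                         activation=lambda t: 1.0 if t < 30.0 else 0.0)
```

Under sustained maximal activation the capacity decays exponentially toward zero (the "fatigue curve"), and during rest it relaxes back toward 1.0 (the "recovery curve"), which is the qualitative behavior the three phenomenological parameters are meant to capture.
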

Tang, C Y; Stojanovic, B; Tsui, C P; Kojic, M

2005-01-01

113

Bayesian Model Selection in Factor Analytic Models

Factor analytic models are widely used in social science applications to study latent traits, such as intelligence, creativity, stress, and depression, that cannot be accurately measured with a single variable. In recent years, there has been a rise in the popularity of factor models due to their flexibility in characterizing multivariate data. For example, latent factor regression models have been

Joyee Ghosh; David B. Dunson

114

In theory, the combination of mathematical modeling with experimental studies can be a powerful and compelling approach to understanding cell biology. In practice, choosing appropriate problems, identifying willing and able collaborators, and publishing the resulting research can be remarkably challenging. To provide perspective on the question of whether and when to combine modeling and experiments, a panel of experts at the 2010 ASCB Annual Meeting shared their personal experiences and advice on how to use modeling effectively.

Fletcher, Daniel A.

2011-01-01

115

Modelling Holocene climate trends: A model intercomparison

NASA Astrophysics Data System (ADS)

For the paleomodel intercomparison, we compared the results from scenarios with identical forcing for the mid-to-late Holocene period: varying Earth's orbital parameters, a fixed level of greenhouse gas concentrations, and a fixed land-sea mask and orography. 18 paleoclimate modelling groups are involved in this initiative, working on transient Holocene simulations. One major issue on both the modelling and reconstruction sides was the quantification of uncertainties, and the evaluation of trend and variability patterns beyond a single proxy and beyond a single model simulation. The goal is to obtain robust results on trend patterns, seasonality changes, as well as transitions on a regional scale. The major objective is to investigate the spatio-temporal pattern of temperature and precipitation changes during the Holocene as derived from integrations with a set of comprehensive global climate models (GCMs), Earth system models of intermediate complexity (EMICs), as well as conceptual-statistical models. In the conceptual-statistical model by Laepple and Lohmann (2009) a rigorously simple concept is proposed: the temperature response on astronomical timescales follows the same function as the response to seasonal insolation variations. The general pattern of surface temperatures in the models shows a high-latitude cooling and a low-latitude warming. Our analysis shows common patterns of temperature changes, especially for the respective summer seasons. This is a common feature for all models considered. Due to strong differences in atmospheric dynamics and sea ice, we find significant differences in the winter patterns. The precipitation trends show a clear difference between GCMs and EMICs, mainly because of the treatment of the hydrological cycle in the tropics. Most models show a southward movement of the ITCZ.
Using statistical analysis of the models' variability modes and their amplitude during the Holocene, we reveal a strong heterogeneity in temperature and precipitation patterns and no common response in trend and variability, although a tendency towards NAO- and SOI- (El Niño-like) states is detected. Our approach is to obtain, through ensemble runs of climate model output, a range of solutions that can then be compared and evaluated for their consistency with the range of uncertainty given by the palaeoclimate proxies. This approach allows a much more congruent comparison between proxy data and model results, because both investigations provide a range of possible climate change in which the errors in the estimates are accounted for. We compare the ocean temperature evolution of the Holocene as simulated by climate models and reconstructed from marine temperature proxies. Independently of the choice of the climate model, we observe significant mismatches between modelled and reconstructed amplitudes in the trends for the last 6000 years.

Lohmann, Gerrit

2013-04-01

116

The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
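Step (7) above verifies the GoldSim implementation against hand calculations. As a minimal sketch of what such a hand check looks like for a single ingestion pathway (the function name and all numbers are illustrative assumptions, not ERMYN's actual equations):

```python
def annual_ingestion_dose(conc_bq_per_kg, intake_kg_per_yr, dose_coeff_sv_per_bq):
    """Single-pathway hand calculation: annual dose (Sv/yr) equals the
    radionuclide concentration in food times annual intake times the
    ingestion dose coefficient. A generic textbook formula, not ERMYN's
    actual model equations."""
    return conc_bq_per_kg * intake_kg_per_yr * dose_coeff_sv_per_bq

# Hypothetical check of a simulated value against the hand calculation
# (all numbers illustrative).
hand = annual_ingestion_dose(conc_bq_per_kg=2.0, intake_kg_per_yr=100.0,
                             dose_coeff_sv_per_bq=1.3e-8)
simulated = 2.6e-6  # value a stochastic implementation might report
assert abs(hand - simulated) / simulated < 1e-9
```

In practice the comparison covers every pathway and radionuclide, but the principle is the same: a transparent arithmetic check that the software reproduces.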

M. A. Wasiolek

2003-10-27

117

The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

D. W. Wu

2003-07-16

118

ERIC Educational Resources Information Center

This paper focuses on one method used to introduce model design and creation using StarLogo to a group of high school teachers. Teachers with model-building skills can easily customize modeling environments for their classes. More importantly, model building can enable teachers to approach their curricula from a more holistic perspective, as well…

Klopfer, Eric; Colella, Vanessa

119

Optimal predictive model selection

Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper
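The distinction the abstract draws between the highest-posterior-probability model and the optimal predictive model can be illustrated with a toy sketch of Barbieri and Berger's median probability model, which keeps every variable whose posterior inclusion probability is at least 1/2. All probabilities below are made-up illustrative numbers:

```python
# Hypothetical posterior model probabilities over subsets of three predictors
# (illustrative numbers only; they sum to 1).
posterior = {
    (): 0.02, ("x1",): 0.08, ("x2",): 0.04, ("x3",): 0.02,
    ("x1", "x2"): 0.26, ("x1", "x3"): 0.30, ("x2", "x3"): 0.08,
    ("x1", "x2", "x3"): 0.20,
}

def inclusion_probabilities(posterior):
    """Posterior probability that each variable appears in the model."""
    incl = {}
    for model, p in posterior.items():
        for var in model:
            incl[var] = incl.get(var, 0.0) + p
    return incl

def median_probability_model(posterior):
    """Variables whose posterior inclusion probability is >= 1/2."""
    incl = inclusion_probabilities(posterior)
    return tuple(sorted(v for v, p in incl.items() if p >= 0.5))

highest = max(posterior, key=posterior.get)   # ('x1', 'x3') here
median = median_probability_model(posterior)  # ('x1', 'x2', 'x3') here
```

Here the single most probable model omits x2, while the median probability model retains it (its inclusion probability is 0.58); under squared error loss the latter is often the better predictive choice.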

Maria Maddalena Barbieri; James O. Berger

2004-01-01

120

Optimal predictive model selection

Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper

Maria Maddalena Barbieri; James O. Berger

2002-01-01

121

Model Engineering using Multimodeling.

National Technical Information Service (NTIS)

We study the simultaneous use of multiple modeling techniques in the design of embedded systems. We begin with a pre-existing Statecharts model of a simple case study, a traffic light for a pedestrian crossing. This model combines two distinct models of c...

C. Brooks; C. P. Cheng; E. A. Lee; R. Von Hanxleden; T. H. Feng

2008-01-01

122

Instructional Models (Part III).

ERIC Educational Resources Information Center

Describes five models for instructional design, including the ASSURE Model (analyze learners, state objectives, select methods, media and materials, utilize media and materials, require learner participation, and evaluate); the Learning Cycle; Instructional Analysis Model; Integrated Model for Teaching Library Skills; and Instructional…

Callison, Daniel

2002-01-01

123

Towards connectionist language models

In problems such as automatic speech recognition and machine translation, where the system response must be a sentence in a given language, language models are employed in order to improve system performance. These language models are usually n-gram models (for instance, bigram or trigram models) which are estimated from large text databases using the occurrence frequencies of these n-grams.

María José Castro; Francisco Casacuberta; Federico Prat

1999-01-01

124

NASA Astrophysics Data System (ADS)

The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by Isham [6].
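As a generic illustration of the kind of transmission model such work builds on (a minimal susceptible-infected sketch of my own construction, not Isham's models), one can integrate the standard mass-action equations forward in time:

```python
def simulate_si(beta, s0, i0, dt, steps):
    """Forward-Euler integration of a minimal susceptible-infected (SI) model:
    dS/dt = -beta*S*I/N and dI/dt = +beta*S*I/N (no recovery), where
    N = S + I is the constant total population size."""
    s, i = float(s0), float(i0)
    n = s + i
    history = [(s, i)]
    for _ in range(steps):
        new_infections = beta * s * i / n * dt
        s -= new_infections
        i += new_infections
        history.append((s, i))
    return history

# Toy run: 10 infected in a population of 1000, transmission rate 0.5 per year
trajectory = simulate_si(beta=0.5, s0=990, i0=10, dt=0.1, steps=200)
```

The infected count grows logistically towards the whole population; realistic HIV/AIDS models add stages of infection, heterogeneous contact rates, and demographic turnover.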

Rahmani, Fouad Lazhar

2010-11-01

125

Model checking and abstraction

We describe a method for using abstraction to reduce the complexity of temporal-logic model checking. Using techniques similar to those involved in abstract interpretation, we construct an abstract model of a program without ever examining the corresponding unabstracted model. We show how this abstract model can be used to verify properties of the original program. We have implemented a system

Edmund M. Clarke; Orna Grumberg; David E. Long

1994-01-01

126

Model checking and abstraction

We describe a method for using abstraction to reduce the complexity of temporal logic model checking. The basis of this method is a way of constructing an abstract model of a program without ever examining the corresponding unabstracted model. We show how this abstract model can be used to verify properties of the original program. We have implemented a system

Edmund M. Clarke; Orna Grumberg; David E. Long

1992-01-01

127

This paper investigates the use of formal mathematical models in the design of interactive systems and argues for the development of generic models that describe the behaviour of a class of interactive systems. In recent years a number of authors have suggested methods for modelling interactive systems using notations and frameworks drawn from software engineering mathematics. We argue that these models

Andrew M. Dearden

1997-01-01

128

A hierarchy of models that capture realistic aspects of reactive, real-time, and hybrid systems is introduced. On the most abstract level, the qualitative (non-quantitative) model of reactive systems captures the temporal precedence aspect of time. A more refined model is that of real-time systems, which represents the metric aspect of time. The third and most detailed model is that of hybrid systems,

Zohar Manna; Amir Pnueli

1993-01-01

129

Generalized Gaussian process models

We propose a generalized Gaussian process model (GGPM), which is a unifying framework that encompasses many existing Gaussian process (GP) models, such as GP regression, classification, and counting. In the GGPM framework, the observation likelihood of the GP model is itself parameterized using the exponential family distribution. By deriving approximate inference algorithms for the generalized GP model, we are able

Antoni B. Chan; Daxiang Dong

2011-01-01

130

NASA Astrophysics Data System (ADS)

The average multiplicity of gamma rays emitted by fragments originating from the fission of 226Th nuclei formed via a complete fusion of 18O and 208Pb nuclei at laboratory energies of 18O projectile ions in the range E_lab = 78-198.5 MeV is measured and analyzed. The total spins of fission fragments are found and used in an empirical analysis of the energy dependence of the anisotropy of these fragments under the assumption that their angular distributions are formed in the vicinity of the scission point. The average temperature of compound nuclei at the scission point and their average angular momenta in the entrance channel are found for this analysis. Also, the moments of inertia are calculated for this purpose for the chain of fissile thorium nuclei at the scission point. All of these parameters are determined at the scission point by means of three-dimensional dynamical calculations based on Langevin equations. A strong alignment of fragment spins is assumed in analyzing the anisotropy in question. In that case, the energy dependence of the anisotropy of fission fragments is faithfully reproduced at energies in excess of the Coulomb barrier (E_c.m. - E_B >= 30 MeV). It is assumed that, as the excitation energy and the angular momentum of a fissile nucleus are increased, the region where the angular distributions of fragments are formed is gradually shifted from the region of nuclear deformations in the vicinity of the saddle point to the region of nuclear deformations in the vicinity of the scission point, the total angular momentum of the nucleus undergoing fission being split into the orbital component, which is responsible for the anisotropy of fragments, and the spin component. This conclusion can be qualitatively explained on the basis of linear-response theory.

Rusanov, A. Ya.; Adeev, G. D.; Itkis, M. G.; Karpov, A. V.; Nadtochy, P. N.; Pashkevich, V. V.; Pokrovsky, I. V.; Salamatin, V. S.; Chubarian, G. G.

2007-10-01

131

The average multiplicity of gamma rays emitted by fragments originating from the fission of 226Th nuclei formed via a complete fusion of 18O and 208Pb nuclei at laboratory energies of 18O projectile ions in the range E_lab = 78-198.5 MeV is measured and analyzed. The total spins of fission fragments are found and used in an empirical analysis of the energy dependence of the anisotropy of these fragments under the assumption that their angular distributions are formed in the vicinity of the scission point. The average temperature of compound nuclei at the scission point and their average angular momenta in the entrance channel are found for this analysis. Also, the moments of inertia are calculated for this purpose for the chain of fissile thorium nuclei at the scission point. All of these parameters are determined at the scission point by means of three-dimensional dynamical calculations based on Langevin equations. A strong alignment of fragment spins is assumed in analyzing the anisotropy in question. In that case, the energy dependence of the anisotropy of fission fragments is faithfully reproduced at energies in excess of the Coulomb barrier (E_c.m. - E_B >= 30 MeV). It is assumed that, as the excitation energy and the angular momentum of a fissile nucleus are increased, the region where the angular distributions of fragments are formed is gradually shifted from the region of nuclear deformations in the vicinity of the saddle point to the region of nuclear deformations in the vicinity of the scission point, the total angular momentum of the nucleus undergoing fission being split into the orbital component, which is responsible for the anisotropy of fragments, and the spin component. This conclusion can be qualitatively explained on the basis of linear-response theory.

Rusanov, A. Ya. [National Nuclear Center of the Republic of Kazakhstan, Institute of Nuclear Physics (Kazakhstan)], E-mail: rusanov@inp.kz; Adeev, G. D. [Omsk State University (Russian Federation); Itkis, M. G.; Karpov, A. V. [Joint Institute for Nuclear Research (Russian Federation); Nadtochy, P. N. [Omsk State University (Russian Federation); Pashkevich, V. V.; Pokrovsky, I. V.; Salamatin, V. S. [Joint Institute for Nuclear Research (Russian Federation); Chubarian, G. G. [Texas A and M University, Cyclotron Institute (United States)

2007-10-15

132

The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

C.F. Ahlers, H.H. Liu

2001-12-18

133

We present a new model of the homogeneous BSSRDF based on large-scale simulations. Our model captures the appearance of materials that are not accurately represented using existing single scattering models or multiple isotropic scattering models (e.g. the diffusion approximation). We use an analytic function to model the 2D hemispherical distribution of exitant light at a point on the surface, and

Craig Donner; Jason Lawrence; Ravi Ramamoorthi; Toshiya Hachisuka; Henrik Wann Jensen; Shree K. Nayar

2009-01-01

134

NASA Astrophysics Data System (ADS)

We have investigated the "weak chaos" exponent to see if it can be considered as a classification parameter of different sandpile models. Our simulation results show that the (Abelian) BTW sandpile model, the (non-Abelian) Zhang model, and the ("Abelian") Manna model possess different "weak chaos" exponents, so they may belong to different universality classes. Finally, we show that moving off the critical point destroys this behavior in these models.
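For reference, the (Abelian) BTW toppling rule underlying these simulations can be sketched in a few lines; the "weak chaos" exponent analysis itself, which compares trajectories of slightly perturbed configurations, is not reproduced here:

```python
def topple(grid):
    """Relax a 2D BTW sandpile in place: any site holding >= 4 grains gives
    one grain to each of its four neighbours; grains toppled over the edge
    are lost (open boundary conditions). The final stable configuration is
    independent of the toppling order -- the Abelian property."""
    n = len(grid)
    unstable = True
    while unstable:
        unstable = False
        for r in range(n):
            for c in range(n):
                if grid[r][c] >= 4:
                    unstable = True
                    grid[r][c] -= 4
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= r + dr < n and 0 <= c + dc < n:
                            grid[r + dr][c + dc] += 1
    return grid

# Dropping a grain on a site that already holds 3 triggers an avalanche:
pile = [[0, 0, 0], [0, 3, 0], [0, 0, 0]]
pile[1][1] += 1
topple(pile)  # centre topples once: [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Driving the pile with random grain additions and recording avalanche sizes reproduces the model's characteristic power-law statistics at the self-organized critical state.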

Moghimi-Araghi, Saman; Mollabashi, Ali

135

Atmospheric modeling in complex terrain

Los Alamos investigators have developed several models which are relevant to modeling Mexico City air quality. The collection of models includes: meteorological models, dispersion models, air chemistry models, and visibility models. The models have been applied in several different contexts. They have been developed primarily to address the complexities posed by complex terrain. HOTMAC is the meteorological model which requires

M. D. Williams; G. E. Streit

1990-01-01

136

Model Shrinkage for Discriminative Language Models

NASA Astrophysics Data System (ADS)

This paper describes a technique for overcoming the model shrinkage problem in automatic speech recognition (ASR), which allows application developers and users to control the model size with less degradation of accuracy. Recently, models for ASR systems tend to be large, and this can constitute a bottleneck for developers and users without special knowledge of ASR with respect to introducing the ASR function. In particular, discriminative language models (DLMs) are usually designed in a high-dimensional parameter space, although DLMs have gained increasing attention as an approach for improving recognition accuracy. Our proposed method can be applied to linear models, including DLMs, in which the score of an input sample is given by the inner product of its features and the model parameters; it can shrink models with an easy computation by obtaining simple statistics, namely the square sums of the feature values appearing in a data set. Our experimental results show that the proposed method can shrink a DLM with little degradation in accuracy and performs properly whether or not the data used for obtaining the statistics are the same as the data used for training the model.
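A hedged sketch of this kind of statistics-based shrinkage for a linear model follows. The scoring rule here (squared weight times the square sum of the corresponding feature over the data) is an illustrative assumption combining the quantities the abstract mentions, not necessarily the paper's exact criterion:

```python
import numpy as np

def shrink_linear_model(weights, feature_sq_sums, keep_fraction):
    """Zero out all but the top-scoring parameters of a linear model.
    Each parameter is scored by w_k^2 times the square sum of feature k
    over the data set, so rarely active features with small weights are
    pruned first (illustrative criterion, not the paper's exact one)."""
    scores = weights ** 2 * feature_sq_sums
    k = max(1, int(len(weights) * keep_fraction))
    keep = np.argsort(scores)[-k:]       # indices of the k largest scores
    shrunk = np.zeros_like(weights)
    shrunk[keep] = weights[keep]
    return shrunk

weights = np.array([1.0, 2.0, 0.5, 3.0])
sq_sums = np.array([10.0, 0.0, 100.0, 1.0])  # feature 1 never fires in the data
half_model = shrink_linear_model(weights, sq_sums, keep_fraction=0.5)
```

The scores work out to [10, 0, 25, 9], so only the parameters for features 0 and 2 survive; note how the large weight on the never-firing feature 1 is pruned despite its magnitude.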

Oba, Takanobu; Hori, Takaaki; Nakamura, Atsushi; Ito, Akinori

137

Aggregation and Model Construction for Volatility Models

In this paper we rigorously study some of the properties of continuous-time stochastic volatility models. We have five main results, including: the stochastic volatility class can be linked to Cox process based models of tick-by-tick financial data; we characterise the moments, autocorrelation function and spectrum of squared returns; based only on discrete time returns, we give a simple

O. E. Barndorff-Nielsen; Neil Shephard

1998-01-01

138

Coupling global biome models with climate models.

National Technical Information Service (NTIS)

The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME mo...

M. Claussen

1994-01-01

139

Bounded LTL Model Checking with Stable Models

In this paper bounded model checking of asynchronous concurrent systems is introduced as a promising application area for answer set programming. As the model of asynchronous systems a generalization of communicating automata, 1-safe Petri nets, are used. It is shown how a 1-safe Petri net and a requirement on the behavior of the net can be translated into a logic

Keijo Heljanko; Ilkka Niemelä

2001-01-01

140

Bounded LTL Model Checking with Stable Models

In this paper bounded model checking of asynchronous concurrent systems is introduced as a promising application area for answer set programming. As the model of asynchronous systems a generalization of communicating automata, 1-safe Petri nets, are used. It is shown how a 1-safe Petri net and a requirement on the behavior of the net can be translated into a

Keijo Heljanko; Ilkka Niemelä

2003-01-01

141

NASA Astrophysics Data System (ADS)

Climate change projections are often given as equally weighted averages across ensembles of climate models, despite the fact that the sampling of the underlying ensembles is unclear. We show that a hierarchical clustering of a metric of spatial and temporal variations of either surface temperature or precipitation in control simulations can capture many model relationships across different ensembles. Strong similarities are seen between models developed at the same institution, between models sharing versions of the same atmospheric component, and between successive versions of the same model. A perturbed parameter ensemble of a model appears separate from other structurally different models. The results provide insight into intermodel relationships, into how models evolve through successive generations, and suggest that assuming model independence in such ensembles of opportunity is not justified.
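A minimal sketch of such a similarity analysis, assuming SciPy is available and using synthetic stand-ins for the models' variability fields (the institutional groupings below are invented for illustration, not CMIP data):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Synthetic "fingerprints": each row stands for one model's flattened field
# of control-run temperature variability (illustrative numbers).
rng = np.random.default_rng(0)
family_a, family_b = rng.normal(size=50), rng.normal(size=50)
models = np.vstack([
    family_a + 0.1 * rng.normal(size=50),  # institution A, model version 1
    family_a + 0.1 * rng.normal(size=50),  # institution A, model version 2
    family_b + 0.1 * rng.normal(size=50),  # institution B, model version 1
    family_b + 0.1 * rng.normal(size=50),  # institution B, model version 2
])

# Hierarchical clustering on correlation distance between model fingerprints
Z = linkage(pdist(models, metric="correlation"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Models built on the same "family" vector cluster together, mirroring the paper's finding that models from the same institution or sharing components are not statistically independent members of the ensemble.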

Masson, D.; Knutti, R.

2011-04-01

142

Model Validation Status Review

The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.

E.L. Hardin

2001-11-28

143

As the field of phylogeography has continued to move in the model-based direction, researchers continue struggling to construct useful models for inference. These models must be both simple enough to be tractable yet contain enough of the complexity of the natural world to make meaningful inference. Beyond constructing such models for inference, researchers explore model space and test competing models with the data on hand, with the goal of improving the understanding of the natural world and the processes underlying natural biological communities. Approximate Bayesian computation (ABC) has increased in recent popularity as a tool for evaluating alternative historical demographic models given population genetic samples. As a thorough demonstration, Pelletier & Carstens () use ABC to test 143 phylogeographic submodels given geographically widespread genetic samples from the salamander species Plethodon idahoensis (Carstens et al. ) and, in so doing, demonstrate how the results of the ABC model choice procedure are dependent on the model set one chooses to evaluate. PMID:24931159

Hickerson, Michael J

2014-06-01

144

Modeling nonstationary longitudinal data.

An important theme of longitudinal data analysis in the past two decades has been the development and use of explicit parametric models for the data's variance-covariance structure. A variety of these models have been proposed, of which most are second-order stationary. A few are flexible enough to accommodate nonstationarity, i.e., nonconstant variances and/or correlations that are not a function solely of elapsed time between measurements. We review five nonstationary models that we regard as most useful: (1) the unstructured covariance model, (2) unstructured antedependence models, (3) structured antedependence models, (4) autoregressive integrated moving average and similar models, and (5) random coefficients models. We evaluate the relative strengths and limitations of each model, emphasizing when it is inappropriate or unlikely to be useful. We present three examples to illustrate the fitting and comparison of the models and to demonstrate that nonstationary longitudinal data can be modeled effectively and, in some cases, quite parsimoniously. In these examples, the antedependence models generally prove to be superior and the random coefficients models prove to be inferior. We conclude that antedependence models should be given much greater consideration than they have historically received. PMID:10985205
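As a concrete illustration of the nonstationarity discussed above, a first-order antedependence model is an AR(1)-type recursion with time-varying coefficients and innovation variances. A sketch of the covariance matrix it implies (a generic construction under my own notation, not the paper's):

```python
import numpy as np

def ad1_covariance(phis, innov_vars):
    """Covariance matrix implied by a first-order antedependence model:
    Y_1 = e_1 and Y_t = phis[t-2] * Y_{t-1} + e_t for t = 2..T, with
    independent innovations e_t of variance innov_vars[t-1]. Time-varying
    coefficients make the variances and correlations nonstationary."""
    T = len(innov_vars)
    var = np.zeros(T)
    var[0] = innov_vars[0]
    for t in range(1, T):
        var[t] = phis[t - 1] ** 2 * var[t - 1] + innov_vars[t]
    cov = np.zeros((T, T))
    for t in range(T):
        cov[t, t] = var[t]
        for s in range(t + 1, T):
            # cov(Y_s, Y_t) = (product of phis linking t to s) * var(Y_t)
            cov[s, t] = cov[t, s] = np.prod(phis[t:s]) * var[t]
    return cov

cov = ad1_covariance(phis=[0.5, 0.5], innov_vars=[1.0, 1.0, 1.0])
# Diagonal variances grow (1.0, 1.25, 1.3125) even with constant inputs,
# and letting phis or innov_vars vary over time gives full nonstationarity.
```

With constant coefficients this reduces to a familiar autoregression; the antedependence formulation's appeal, as the review argues, is that every phi and innovation variance may differ across measurement occasions.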

Núñez-Antón, V; Zimmerman, D L

2000-09-01

145

Seminar: The Bayesian Linear Model - Optimal Predictive Model Selection

Often, model selection is done to choose the best model for prediction. It is commonly perceived that the optimal prediction model is the model with highest posterior probability. This is not necessarily the case, though. Often the optimal predictive model is the median probability model, which often differs from the highest probability model.

Daniel Heersink

146

Antibody modeling assessment II. Structures and models.

To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated in the current models, which emphasizes the need for a curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. Proteins 2014; 82:1563-1582. © 2014 Wiley Periodicals, Inc. PMID:24633955

Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L

2014-08-01

147

Modeling planetary magnetospheres

NASA Technical Reports Server (NTRS)

Recent advances in the development of models of the macroscopic properties of the terrestrial and planetary magnetospheres are reviewed. Particular attention is given to work on semiempirical models of magnetic and electric fields in the earth's magnetosphere, the modeling of magnetospheric storms and substorms in the inner magnetosphere, and the self-consistent modeling of processes in the magnetotail, including reconnection. Magnetohydrodynamic models of the dayside magnetosphere and the magnetotail which are based on calculations of the interaction of the solar wind with the magnetosphere are also considered. Finally, work on the modeling of the magnetospheres of Mercury, Venus, Jupiter, Saturn and Uranus is presented.

Walker, R. J.

1983-01-01

148

A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.

Reiter, E.R.

1980-01-01

149

NASA Technical Reports Server (NTRS)

The development of new postprocessing software by the climate modeling group is summarized. Work to code, test, and perform simulations with global general circulation models is described. The models improve understanding of, and the ability to predict, the vagaries of weather and climate. To learn from and utilize the model results, it is necessary to create elaborate postprocessing software to allow analysis of the large volume of data produced. The models produce sigma history tapes. The sigma history records are interpolated to pressure history records, which are written on a pressure history tape. The model results are analyzed on pressure surfaces, with snapshots or time averages.

Abeles, J.; Pittarelli, E.; Randall, D. A.

1981-01-01

150

NASA Technical Reports Server (NTRS)

A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.

Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.

1986-01-01

151

NASA Technical Reports Server (NTRS)

A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.

Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.

1987-01-01

152

Compositional Belief Function Models

Abstract—Analogously to Graphical Markov models, Compositional models also serve as an efficient tool for multidimensional model representation. The main idea of the latter models resembles a jigsaw puzzle: multidimensional models are assembled (composed) from a large number of small pieces, that is, from a large number of low-dimensional models. Originally they were designed to represent multidimensional probability distributions. In this paper they will be used to represent multidimensional belief functions (or, more precisely, multidimensional basic belief assignments)

Radim Jirousek

2008-01-01

153

NASA Technical Reports Server (NTRS)

A new vector Preisach model, called the Reduced Vector Preisach model (RVPM), was developed for fast computations. This model, derived from the Simplified Vector Preisach model (SVPM), has individual components that, like those of the SVPM, are calculated independently using coupled selection rules for the state vector computation. However, the RVPM does not require the rotational correction. Therefore, it provides a practical alternative for computing the magnetic susceptibility using a differential approach. A vector version, using the framework of the DOK model, is implemented. Simulation results for the reduced vector Preisach model are also presented.
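For context, the classical scalar Preisach construction underlying these vector models can be sketched as a population of square-loop hysterons (this is only the textbook scalar model; the SVPM/RVPM add vector components and coupled selection rules on top of it, and all names here are illustrative):

```python
import numpy as np

class ScalarPreisach:
    """Classical scalar Preisach hysteresis model: a grid of square-loop
    hysterons with switch-up threshold alpha >= switch-down threshold beta.
    Illustrative only; the SVPM/RVPM build vector models on this idea."""

    def __init__(self, n=50, hmax=1.0):
        alpha, beta = np.meshgrid(np.linspace(-hmax, hmax, n),
                                  np.linspace(-hmax, hmax, n))
        keep = alpha >= beta                # only the half-plane alpha >= beta
        self.up, self.down = alpha[keep], beta[keep]
        self.state = -np.ones(self.up.size)  # all hysterons start switched down

    def apply(self, h):
        """Apply field h; return the normalized magnetization."""
        self.state[h >= self.up] = 1.0       # driven past the upper threshold
        self.state[h <= self.down] = -1.0    # driven past the lower threshold
        return self.state.mean()
```

After saturating at a large positive field and returning to zero field, the mean hysteron state stays positive (remanence), the basic signature of hysteresis these models capture.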

Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)

2002-01-01

154

NSDL National Science Digital Library

This on-line dynamic model from Horticulture Research International (HRI) "simulates the growth response of 25 crops to applications of nitrogen fertilizer." The model incorporates the effects of climate, organic material and leaching. Users select a region of the world, enter input into the model (e.g., crop type, date of sowing, weather conditions, nitrogen applications, etc.), and run the model for numeric and graphical output. Substantial effort has been made to describe the model's behavior and to present useful output; interested users may select the "advanced" or "detailed" options for further information on each model.

Aycott, Ann; Greenwood, Duncan J.; Rahn, Clive R.

155

NSDL National Science Digital Library

The Marine Modeling and Analysis Branch (MMAB) of the Environmental Modeling Center is responsible for the development of improved numerical weather and marine prediction modeling systems. These models provide analysis and real-time forecast guidance on marine meteorological, oceanographic, and cryospheric parameters over the global oceans and coastal areas of the US. This site provides access to MMAB modeling tools for ocean waves (including an interactive presentation), sea ice, marine meteorology, sea surface temperature, and more. The site also features a mailing list, a bibliography of publications, and information about modeling products still in the experimental and development phases.

National Centers For Environmental Prediction, National O.

156

Modeling Guru: Knowledge Base for NASA Modelers

NASA Astrophysics Data System (ADS)

Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration are becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others.
The default "wiki" document lets users edit within the browser so others can easily collaborate on the same document, even allowing the author to select those who may edit and approve the document. To maintain knowledge integrity, all documents are moderated before they are visible to the public. Modeling Guru, running on Clearspace by Jive Software, has been an active resource to the NASA modeling and HEC communities for more than a year and currently has more than 100 active users. SIVO will soon install live instant messaging support, as well as a user-customizable homepage with social-networking features. In addition, SIVO plans to implement a large dataset/file storage capability so that users can quickly and easily exchange datasets and files with one another. Continued active community participation combined with periodic software updates and improved features will ensure that Modeling Guru remains a vibrant, effective, easy-to-use tool for the NASA scientific community.

Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

2009-05-01

157

A future of the model organism model

Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. However, researchers must take special care and implement new resources to enable the newest members of the community to engage fully with the remarkable legacy of information in these fields.

Rine, Jasper

2014-01-01

158

NASA Technical Reports Server (NTRS)

During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits, and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is no empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are intertwined into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose.
The component I was assigned, specifically, was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch would stop fluid flow.

Ensey, Tyler S.

2013-01-01

159

Digital Troposcatter Performance Model.

National Technical Information Service (NTIS)

The development of a computer prediction model for digital troposcatter communication system design is described. Propagation and modem performance are modeled. These include Path Loss and RSL distributions for troposcatter propagation and mixed troposcat...

A. Malaga; J. Fetteroll; P. Monsen; S. Parl; S. Tolman

1983-01-01

160

NASA Technical Reports Server (NTRS)

This video explores the world of modeling at the NASA Johnson Space Center. Artisans create models, large and small, to help scientists and engineers make final design modifications before building more costly prototypes.

1991-01-01

161

NSDL National Science Digital Library

In this activity, learners make a 3-D model of DNA using paper and toothpicks. While constructing this model, learners will explore the composition and structure of DNA. The activity also gives suggestions for alternate materials and challenges to explore.

History, American M.

2012-06-26

162

... construction of a global Geospace General Circulation Model (GGCM) with predictive capability. This ... global change is the development of general circulation models (GCMs) that can be used to study the ...

163

We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...

164

ERIC Educational Resources Information Center

Presents an activity in which models help students visualize both the DNA process and transcription. After constructing DNA, RNA messenger, and RNA transfer molecules; students model cells, protein synthesis, codons, and RNA movement. (MDH)

Brinner, Bonnie

1992-01-01

165

Defines the objectives of the Liquid Fuels Market Model (LFMM), describes its basic approach, and provides detail on how it works. This report is intended as a reference document for model analysts, users, and the public.

John Powell

2013-12-17

166

Mathematical Modeling Using MATLAB.

National Technical Information Service (NTIS)

Mathematical modeling forms a bridge between the study of mathematics and the application of mathematics with the intent of explaining or predicting real world behavior. In their book A First Course in Mathematical Modeling, Frank R. Giordano, Maurice D. ...

D. D. Phillips

1998-01-01

167

Advanced Spectral Modeling Development.

National Technical Information Service (NTIS)

This report describes the results of a basic research program to develop advanced spectral modeling techniques to treat a variety of current topics in spectroscopy and radiative transfer relevant to the modeling of atmospheric transmission and radiance fi...

R. G. Isaacs; R. D. Worsham; W. O. Gallery; S. A. Clough; J. L. Moncet

1992-01-01

168

NASA Technical Reports Server (NTRS)

A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

Agena, S. M.; Pusey, M. L.; Bogle, I. D.

1999-01-01

169

National Technical Information Service (NTIS)

AFRL/RHDO has developed a configurable, laser-tissue interaction model that includes components from various areas of Biophysics. The model predicts heat transfer in biological tissue, in either one-dimension or two- dimensional cylindrical coordinates, a...
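As a point of reference, the one-dimensional heat-transfer core of such a model can be sketched with an explicit finite-difference scheme (a minimal conduction-only sketch; the AFRL model described above additionally treats tissue perfusion, a laser source term, and cylindrical coordinates, and every name here is an assumption):

```python
import numpy as np

def heat_1d(T0, alpha, dx, dt, steps):
    """March dT/dt = alpha * d2T/dx2 forward in time with an explicit
    finite-difference (FTCS) scheme and fixed-temperature boundaries.
    A conduction-only core; a laser-tissue model would add perfusion
    and a laser heating term at each step."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    T = np.asarray(T0, dtype=float).copy()
    for _ in range(steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T
```

The stability restriction r = alpha * dt / dx^2 <= 1/2 is the standard price of the explicit scheme; implicit schemes trade it for a linear solve per step.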

G. D. Buffington; I. C. Clark; L. J. Irvin; P. D. Maseberg; R. J. Thomas

2007-01-01

170

National Technical Information Service (NTIS)

These lecture notes are intended as a pedagogical introduction to several popular extensions of the standard model of strong and electroweak interactions. The topics include the Higgs sector, the left-right symmetric model, grand unification and supersymm...

F. Cuypers

1997-01-01

171

Computer modeling of detonators.

National Technical Information Service (NTIS)

A mathematical model of detonators which describes the resistance of the exploding bridgewire or exploding foil initiator as a function of energy deposition will be described. This model includes many parameters that can be adjusted to obtain a close fit ...

C. M. Furnberg

1994-01-01

172

Viscoelastic Finite Difference Modeling.

National Technical Information Service (NTIS)

Real earth media disperse and attenuate propagating waves. This anelastic behavior can be well described by a viscoelastic model. We have developed a finite difference simulator to model wave propagation in viscoelastic media. The finite difference method...

J. O. Blanch; J. O. Robertsson; W. W. Symes

1993-01-01

173

Preclinical Models of Depression.

National Technical Information Service (NTIS)

New animal models of human depression, especially endogenous depression, were developed. Depression was induced by means other than drugs. The model was validated by behavioral, neuroendocrine and neurochemical resemblances to the human disorder and by re...

B. J. Carroll

1983-01-01

174

Modeling Fluid Structure Interaction.

National Technical Information Service (NTIS)

The principal goal of this program is on integrating experiments with analytical modeling to develop physics-based reduced-order analytical models of nonlinear fluid-structure interactions in articulated naval platforms. The critical research path for thi...

H. Benaroya; T. Wei

2000-01-01

175

National Technical Information Service (NTIS)

The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the resu...

2005-01-01

176

National Technical Information Service (NTIS)

A mathematical model has been developed for designing boilers of district heating size. Grate boilers, which use direct combustion method of solid domestic fuels, are sized thermally with the model, based on the information on fuel, and the desired boiler...

M. Miettinen; J. Huotari; D. Asplund

1984-01-01

177

Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...

178

NSDL National Science Digital Library

Arizona State University's modeling instruction and software development research. This approach to reform of curriculum design and teaching methodology has been guided by a Modeling Theory of Physics Instruction.

Hestenes, David

2003-10-10

179

NASA Technical Reports Server (NTRS)

Model support system and instrumentation cabling of the 1% scale X-33 reaction control system model, installed in the Unitary Plan Wind Tunnel for supersonic testing in building 1251, test section #2.

1998-01-01

180

ERIC Educational Resources Information Center

Reports on a model of environmental education that aims to encourage greater attachment to the bioregion of Arcadia. The model results from cooperation within a village community and addresses the environmental education of people of all ages. (DDR)

Pruneau, Diane; Chouinard, Omer; Arsenault, Charline

1998-01-01

181

One Dimensional Thermal Model Modele Thermique Unidimensionnel.

National Technical Information Service (NTIS)

A natural atmospheric model for the vertical temperature distribution as a function of the main physical parameters of the atmosphere and of the vertical profiles of the optically active constituents is presented. Topics include theoretical study of radiati...

J. Bensimon B. Dehove

1977-01-01

182

Aerosol Modeling for the Global Model Initiative

NASA Technical Reports Server (NTRS)

The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

Weisenstein, Debra K.; Ko, Malcolm K. W.

2001-01-01

183

PREDICTIVE MODELS. Enhanced Oil Recovery Model

PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons, making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

Ray, R.M. [DOE Bartlesville Energy Technology Center, Bartlesville, OK (United States)]

1992-02-26

184

Current Animal Models: Cotton Rat Animal Model

The cotton rat (Sigmodon hispidus) model has proven to be a suitable small animal model for measles virus pathogenesis to fill the niche between tissue culture and studies in macaques. Similar to mice, inbred cotton rats are available in a microbiologically defined quality with an ever-increasing arsenal of reagents and methods available for the study of infectious diseases.

S. Niewiesk

185

Solid Waste Projection Model: Model user's guide

The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab.

Stiles, D.L.; Crow, V.L.

1990-08-01

186

Likelihood-based regression models such as the normal linear regression model and the linear logistic model assume a linear (or some other parametric) form for the covariates $X_1, X_2, \cdots, X_p$. We introduce the class of generalized additive models which replaces the linear form $\sum \beta_j X_j$ by a sum of smooth functions $\sum s_j(X_j)$. The $s_j(\cdot)$'s are unspecified functions that are
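The additive fit can be sketched with the backfitting idea: each smooth $s_j$ is re-estimated in turn on the partial residuals of the others (a minimal sketch under assumptions; the Gaussian-kernel smoother, bandwidth, and iteration count below are illustrative choices, not the paper's algorithm):

```python
import numpy as np

def smooth(x, r, bandwidth=0.3):
    # Nadaraya-Watson (Gaussian kernel) smoother of values r against x;
    # stands in for whatever scatterplot smoother one prefers.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit_gam(X, y, n_iter=20):
    """Fit y ~ alpha + sum_j s_j(X_j) by backfitting: cycle over the
    predictors, smoothing the partial residuals against each in turn."""
    n, p = X.shape
    alpha = y.mean()
    s = np.zeros((p, n))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - s.sum(axis=0) + s[j]
            s[j] = smooth(X[:, j], partial)
            s[j] -= s[j].mean()      # center each s_j for identifiability
    return alpha, s
```

On data generated as sin(x1) + x2^2, the recovered smooths track the sine and quadratic components without either form being specified in advance.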

Trevor Hastie; Robert Tibshirani

1986-01-01

187

We present a method to compute the conditional distribution of a statistical shape model given partial data. The result is a "posterior shape model", which is again a statistical shape model of the same form as the original model. This allows its direct use in the variety of algorithms that include prior knowledge about the variability of a class of shapes with a statistical shape model. Posterior shape models then provide a statistically sound yet easy method to integrate partial data into these algorithms. Usually, shape models represent a complete organ, for instance in our experiments the femur bone, modeled by a multivariate normal distribution. But because in many applications certain parts of the shape are known a priori, it is of great interest to model the posterior distribution of the whole shape given the known parts. These could be isolated landmark points or larger portions of the shape, like the healthy part of a pathological or damaged organ. However, because for most shape models the dimensionality of the data is much higher than the number of examples, the normal distribution is singular, and the conditional distribution not readily available. In this paper, we present two main contributions: First, we show how the posterior model can be efficiently computed as a statistical shape model in standard form and used in any shape model algorithm. We complement this paper with a freely available implementation of our algorithms. Second, we show that most common approaches put forth in the literature to overcome this are equivalent to probabilistic principal component analysis (PPCA) and Gaussian Process regression. To illustrate the use of posterior shape models, we apply them on two problems from medical image analysis: model-based image segmentation incorporating prior knowledge from landmarks, and the prediction of anatomically correct knee shapes for trochlear dysplasia patients, which constitutes a novel medical application. 
Our experiments confirm that the use of conditional shape models for image segmentation improves the overall segmentation accuracy and robustness. PMID:23837968
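The core computation, conditioning a multivariate normal shape model on known coordinates, can be sketched as follows (a generic Gaussian-conditioning sketch; the ridge regularizer stands in for the paper's principled handling of singular covariances, and all names are illustrative):

```python
import numpy as np

def conditional_gaussian(mu, cov, known_idx, known_vals):
    """Condition a multivariate normal N(mu, cov) on x[known_idx] = known_vals,
    returning the mean and covariance of the remaining coordinates.
    The small ridge is a stand-in for a proper treatment of the singular
    covariances that arise when examples are fewer than dimensions."""
    all_idx = np.arange(mu.size)
    unknown_idx = np.setdiff1d(all_idx, known_idx)
    mu_u, mu_k = mu[unknown_idx], mu[known_idx]
    S_uu = cov[np.ix_(unknown_idx, unknown_idx)]
    S_uk = cov[np.ix_(unknown_idx, known_idx)]
    S_kk = cov[np.ix_(known_idx, known_idx)]
    k = known_idx.size
    # Conditional mean and covariance via the Schur complement.
    gain = S_uk @ np.linalg.solve(S_kk + 1e-8 * np.eye(k), np.eye(k))
    cond_mu = mu_u + gain @ (known_vals - mu_k)
    cond_cov = S_uu - gain @ S_uk.T
    return cond_mu, cond_cov
```

The returned pair (cond_mu, cond_cov) is itself a Gaussian shape model over the unknown coordinates, which is what lets the posterior model plug back into any algorithm that accepts a shape model.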

Albrecht, Thomas; Lüthi, Marcel; Gerig, Thomas; Vetter, Thomas

2013-12-01

188

Mathematical circulatory system model

NASA Technical Reports Server (NTRS)

A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
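A logistic representation of a regulatory mechanism parameter might look like the following (an illustrative form only; the patent's exact parameterization is not given in the abstract, and all names are assumptions):

```python
import math

def regulatory_parameter(p, p_min, p_max, k, p0):
    """Logistic representation of a regulatory mechanism parameter:
    saturates at p_min and p_max and transitions around pressure p0.
    Illustrative form; the invention's exact parameterization may differ."""
    return p_min + (p_max - p_min) / (1.0 + math.exp(-k * (p - p0)))
```

The logistic shape is a natural choice for regulation because it bounds the parameter between physiological limits while remaining smooth and differentiable for the lumped-parameter equations.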

Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)

2010-01-01

189

SPATIAL ECONOMETRIC STAR MODELS

Spatial regression models incorporating non-stationarity in the regression coefficients are popular. In this paper we propose a family of spatial Smooth Transition AutoRegressive (STAR) models inspired by analogous nonlinear approaches developed in the time series literature. Spatial STAR models constitute a parsimonious, easy-to-estimate approach to modeling nonlinear spatial parameter variation and endogenous detection of spatial regimes. A distinct advantage of

Raymond J. G. M. Florax; Valerien O. Pede; Matthew T. Holt

190

One of the most common ways to model large networks of neurons is to use a simplification called a firing rate model. Rather than track the spiking of every neuron, one instead tracks the averaged behavior of the spike rates of groups of neurons within the circuit. These models are also called population models since they can represent whole populations

G. Bard Ermentrout; David H. Terman

191

Interpreting cointegrated models

Error-correction models for cointegrated economic variables are commonly interpreted as reflecting partial adjustment of one variable to another. We show that error-correction models may also arise because one variable forecasts another. Reduced-form estimates of error-correction models cannot be used to distinguish these interpretations. In an application, we show that the estimated coefficients in the Marsh-Merton (1987) error-correction model of dividend
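A two-step error-correction estimation of the kind discussed can be sketched on simulated data (generic Engle-Granger-style steps in plain least squares; not the Marsh-Merton specification, and all numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate a cointegrated pair: x is a random walk, y tracks x but the
# gap u = y - x mean-reverts (an AR(1) with coefficient 0.5).
x = np.cumsum(rng.normal(size=n))
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal(scale=0.5)
y = x + u

# Step 1: estimate the cointegrating relation y = beta * x by OLS.
beta = np.sum(x * y) / np.sum(x * x)
ect = y - beta * x                       # error-correction term

# Step 2: regress dy on the lagged error-correction term and dx.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([ect[:-1], dx])
coef, *_ = np.linalg.lstsq(Z, dy, rcond=None)
alpha = coef[0]                          # adjustment speed, about phi - 1 = -0.5
```

A negative alpha is exactly what the partial-adjustment reading predicts; the paper's point is that the same reduced-form coefficient also arises when one variable merely forecasts the other, so the sign alone cannot distinguish the two interpretations.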

John Y. Campbell; Robert J. Shiller

1988-01-01

192

Modeling of spacecraft charging

NASA Technical Reports Server (NTRS)

Three types of modeling of spacecraft charging are discussed: statistical models, parametric models, and physical models. Local time dependence of circuit upset for DoD and communication satellites, and electron current to a sphere with an assumed Debye potential distribution are presented. Four regions were involved in spacecraft charging: (1) undisturbed plasma, (2) plasma sheath region, (3) spacecraft surface, and (4) spacecraft equivalent circuit.
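The "assumed Debye potential distribution" mentioned above is the shielded Coulomb potential; a minimal sketch (constants and names are illustrative, and this is only the potential form, not the full sheath current calculation):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_potential(r, q, debye_length):
    """Debye-shielded potential of a point charge q in a plasma:
    the vacuum Coulomb potential attenuated by exp(-r / lambda_D)."""
    return q / (4.0 * math.pi * EPS0 * r) * math.exp(-r / debye_length)
```

At one Debye length the potential is already down by a factor of e relative to vacuum, which is why the plasma sheath region in the four-region picture above is only a few Debye lengths thick.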

Whipple, E. C., Jr.

1977-01-01

193

The purpose of this Model Report is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Office of Repository Development (ORD). The UZ contains the unsaturated rock layers overlying the repository and host unit, which constitute a natural barrier to flow, and the unsaturated rock layers below the repository which constitute a natural barrier to flow and transport. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone'' (BSC 2002 [160819], Section 1.10.8 [under Work Package (WP) AUZM06, Climate Infiltration and Flow], and Section I-1-1 [in Attachment I, Model Validation Plans]). In Section 4.2, four acceptance criteria (ACs) are identified for acceptance of this Model Report; only one of these (Section 4.2.1.3.6.3, AC 3) was identified in the TWP (BSC 2002 [160819], Table 3-1). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, and drift-scale and mountain-scale coupled-process models from the UZ Flow, Transport and Coupled Processes Department in the Natural Systems Subproject of the Performance Assessment (PA) Project. The Calibrated Properties Model output will also be used by the Engineered Barrier System Department in the Engineering Systems Subproject. The Calibrated Properties Model provides input through the UZ Model and other process models of natural and engineered systems to the Total System Performance Assessment (TSPA) models, in accord with the PA Strategy and Scope in the PA Project of the Bechtel SAIC Company, LLC (BSC). The UZ process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions. UZ flow is a TSPA model component.

J. Wang

2003-06-24

194

NASA Astrophysics Data System (ADS)

We obtain a highly efficient global climate model by defining a sector version (120° of longitude) of the coarse resolution Goddard Institute for Space Studies model II. The geography of Wonderland is chosen such that the amount of land as a function of latitude is the same as on Earth. We show that the zonal mean climate of the Wonderland model is very similar to that of the parent model II.

Hansen, J.; Ruedy, R.; Lacis, A.; Russell, G.; Sato, M.; Lerner, J.; Rind, D.; Stone, P.

1997-03-01

195

The aim of this research was to develop a computer-based mathematical model of motor vehicle dynamics. Mainly the tire, suspension, and chassis behaviour have been modelled in detail. Other components like the engine and drive train are modelled in a very simplified manner. The model provides steer, throttle, and brake controls as the means to drive the vehicle. Driving instructions can

Satyen Vyas

2008-01-01

196

Modeling physical growth using mixed effects models

This paper demonstrates the use of mixed effects models for characterizing individual and sample average growth curves based on serial anthropometric data. These models are an advancement over conventional general linear regression because they effectively handle the hierarchical nature of serial growth data. Using body weight data on 70 infants in the Born in Bradford study, we demonstrate how a mixed effects model provides a better fit than a conventional regression model. Further, we demonstrate how mixed effects models can be used to explore the influence of environmental factors on the sample average growth curve. Analyzing data from 183 infant boys (aged 3 to 15 months) from rural South India, we show how maternal education shapes infant growth patterns as early as within the first six months of life. The presented analyses highlight the utility of mixed effects models for analyzing serial growth data because they allow researchers to simultaneously predict individual curves, estimate sample average curves, and investigate the effects of environmental exposure variables.
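The random-intercept idea behind such models can be sketched in two stages on simulated weight data (a deliberately simplified BLUP-style shrinkage with assumed known variance components, not the paper's fitting procedure; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate growth data: 30 infants, 6 visits each (hypothetical numbers,
# loosely echoing the serial weight data described in the abstract).
n_inf, n_vis = 30, 6
age = np.tile(np.arange(n_vis, dtype=float), n_inf)   # age in months
infant = np.repeat(np.arange(n_inf), n_vis)
b_true = rng.normal(0.0, 0.4, n_inf)                  # random intercepts
y = 3.5 + 0.5 * age + b_true[infant] + rng.normal(0.0, 0.2, age.size)

# Stage 1: pooled OLS for the fixed effects (the sample-average curve).
X = np.column_stack([np.ones_like(age), age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stage 2: shrink each infant's mean residual toward zero to get a
# BLUP-style random intercept, using assumed variance components.
resid = y - X @ beta
raw = np.array([resid[infant == i].mean() for i in range(n_inf)])
sigma_b2, sigma_e2 = 0.4 ** 2, 0.2 ** 2               # assumed known
shrink = sigma_b2 / (sigma_b2 + sigma_e2 / n_vis)
b_hat = shrink * raw

# Each infant's predicted curve = sample-average curve + own intercept.
```

The shrinkage factor below one is what distinguishes this from fitting each infant separately: infants with few or noisy measurements borrow strength from the sample average, which is the hierarchical behavior the abstract credits mixed effects models with.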

Johnson, William; Balakrishna, Nagalla; Griffiths, Paula L

2012-01-01

197

Modeling physical growth using mixed effects models.

This article demonstrates the use of mixed effects models for characterizing individual and sample average growth curves based on serial anthropometric data. These models are an advancement over conventional general linear regression because they effectively handle the hierarchical nature of serial growth data. Using body weight data on 70 infants in the Born in Bradford study, we demonstrate how a mixed effects model provides a better fit than a conventional regression model. Further, we demonstrate how mixed effects models can be used to explore the influence of environmental factors on the sample average growth curve. Analyzing data from 183 infant boys (aged 3-15 months) from rural South India, we show how maternal education shapes infant growth patterns as early as within the first 6 months of life. The presented analyses highlight the utility of mixed effects models for analyzing serial growth data because they allow researchers to simultaneously predict individual curves, estimate sample average curves, and investigate the effects of environmental exposure variables. PMID:23283665

Johnson, William; Balakrishna, Nagalla; Griffiths, Paula L

2013-01-01

198

Lightning Return Stroke Models

We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations

Y. T. Lin; M. A. Uman; R. B. Standler

1980-01-01

199

NSDL National Science Digital Library

This site contains 21 modular, easy-to-use economic models that are appropriate for class assignments or in-class demonstrations. Students can simulate all the standard models taught in most economics courses. EconModel uses the Windows OS. The simulations were developed by William R. Parke of the University of North Carolina at Chapel Hill.

Blecha, Betty J.

200

Improving vulnerability discovery models

Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements

Andy Ozment

2007-01-01

201

NSDL National Science Digital Library

This worksheet compares user-input growth data with predictions under linear, exponential, and logistic models of growth. Students can input parameters for each model; the program graphs the results and computes a crude goodness-of-fit measure. Introduces concepts of modeling and statistical analysis that can be more thoroughly explored using standard statistics software (JMP, SAS, etc.)
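The worksheet's comparison can be mimicked in a few lines: evaluate linear, exponential, and logistic curves with user-chosen parameters against observed data and rank them by a crude sum-of-squared-errors measure (the data and parameter values below are invented for illustration).

```python
import math

# Hypothetical observations of a growing quantity at times 0..5
# (values invented for illustration).
times = [0, 1, 2, 3, 4, 5]
observed = [2.0, 3.4, 5.7, 9.0, 12.8, 16.1]

# Candidate growth models with user-chosen (illustrative) parameters.
def linear(t, a=2.0, b=3.0):
    return a + b * t

def exponential(t, n0=2.0, r=0.45):
    return n0 * math.exp(r * t)

def logistic(t, k=20.0, n0=2.0, r=0.7):
    return k / (1 + (k / n0 - 1) * math.exp(-r * t))

def sse(model):
    """Crude goodness-of-fit: sum of squared errors against the data."""
    return sum((model(t) - y) ** 2 for t, y in zip(times, observed))

fits = {name: sse(f) for name, f in
        [("linear", linear), ("exponential", exponential), ("logistic", logistic)]}
best = min(fits, key=fits.get)
print(best, fits)
```

A proper analysis would estimate the parameters rather than take them as given, which is exactly the step the worksheet defers to standard statistics software.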

Tony Weisstein (Truman State University;Biology)

2005-12-16

202

Modelling a Suspension Bridge.

ERIC Educational Resources Information Center

The quadratic function can be modeled in real life by a suspension bridge that supports a uniform weight. This activity uses concrete models and computer generated graphs to discover the mathematical model of the shape of the main cable of a suspension bridge. (MDH)
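A minimal version of that mathematical model: when the deck load is uniform in the horizontal, the main cable follows a parabola through the tower tops with its vertex at midspan. The span, tower height, and sag below are hypothetical values chosen for illustration.

```python
def cable_height(x, span=100.0, tower=25.0, sag=20.0):
    """Height of the main cable above the deck at horizontal position x.

    Quadratic model: a parabola through both tower tops with its vertex
    at midspan, `sag` units below the tops (all parameters illustrative).
    """
    return tower - 4 * sag * x * (span - x) / span ** 2

# Sample the cable shape along the span.
heights = [cable_height(x) for x in range(0, 101, 10)]
print(heights)
```

The samples show the expected symmetry about midspan, which is what students would compare against their concrete models and computer-generated graphs.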

Rawlins, Phil

1991-01-01

203

Appendix W to 40CFR Part 51 (Guideline on Air Quality Models) specifies the models to be used for purposes of permitting, PSD, and SIPs. Through a formal regulatory process this modeling guidance is periodically updated to reflect current science. In the most recent action, thr...

204

NSDL National Science Digital Library

This model can be used to create a virtual population to observe how different factors might affect the spread of a disease. Scientists often use computer models to study complicated phenomena like epidemics. This model is a simplified simulation of any disease that is spread through human contact.

Shodor

205

A Kripke model K′ is a submodel of another Kripke model K if K′ is obtained by restricting the set of nodes of K. In this paper we show that the class of formulas of Intuitionistic Predicate Logic that is preserved under taking submodels of Kripke models is precisely the class of semipositive formulas. This result is an analogue

Albert Visser

2001-01-01

206

We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

Sharlemann, E.T.

1994-07-01

207

Independent Mathematical Modeling.

ERIC Educational Resources Information Center

Argues that a major difficulty in learning how to do mathematical modeling is in the first independent run through the modeling cycle. Reviews a case study (N=12) on mathematical modeling and presents the conclusions in three sections: (1) the choice of task; (2) the presentation of the task; and (3) tutor intervention and support. (ASK)

Smith, D. N.

1997-01-01

208

External Program Model Checking

To analyze larger models for model checking, external algorithms have shown considerable success in the verification of communication protocols. This paper applies external model checking to software executables. The state in such a verification approach is itself very large, such that main memory hinders the analysis of larger state spaces and calls for I/O efficient exploration algorithms. We

Stefan Edelkamp; Shahid Jabbar; Dino Midzic; Daniel Rikowski; Damian Sulewski

209

In this study, we are concerned with the design of granular modeling, originally proposed by Pedrycz and Vasilakos. The enhancement of the development process comes in the form of the boosting mechanism applied to the generic model. In comparison with the original topology of the model studied so far, we augment it by a bias term and investigate its

Witold Pedrycz; Keun-Chang Kwak

2006-01-01

210

We present a model for treating solid boundaries of a DPD fluid. The basic idea is to model the stick boundary conditions by assuming that a layer of DPD particles is stuck on the boundary. By taking a continuum limit of this layer, effective dissipative and stochastic forces on the fluid DPD particles are obtained. The boundary model is tested

M. Revenga; I. Zúñiga; P. Español; I. Pagonabarraga

1998-01-01

211

ERIC Educational Resources Information Center

A dynamical systems approach to energy balance models of climate is presented, focusing on low order, or conceptual, models. Included are global average and latitude-dependent, surface temperature models. The development and analysis of the differential equations and corresponding bifurcation diagrams provides a host of appropriate material for…

Walsh, Jim; McGehee, Richard

2013-01-01

212

Reexamining Community Corrections Models

Historically, community corrections has been based on models of diversion, advocacy, and reintegration. Increases in crime and more high-risk offenders being sentenced to probation have led to emphases on control and surveillance, and “just deserts,” adversary, and restitution models have replaced the original models. The author argues for strategies of internalization, reintegrative shaming, and victim-offender reconciliation for a comprehensive community

Richard Lawrence

1991-01-01

213

Proposes an efficient architecture for selective image modeling. The authors give an example in which models of different scale are reconstructed in parallel. It is shown that this redundant representation can effectively be pruned using the criterion of minimum description length. Models that are selected in the final description indicate the appropriate scale of observation

F. Solina; Ales Leonardis

1992-01-01

214

ERIC Educational Resources Information Center

Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)

Fitzsimmons, Charles P.

1986-01-01

215

National Technical Information Service (NTIS)

This paper describes the data model for POSTGRES, a next-generation extensible database management system being developed at the University of California. The data model is a relational model that has been extended with abstract data types, data of ...

L. A. Rowe; M. R. Stonebraker

1987-01-01

216

Crushed Salt Constitutive Model

The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.

Callahan, G.D.

1999-02-01

217

This report describes the MPP Fortran programming model which will be supported on the first phase MPP systems. Based on existing and proposed standards, it is a work sharing model which combines features from existing models in a way that may be both efficiently implemented and useful.

Douglas M. Pase; Tom MacDonald; Andrew Meltzer

1992-01-01

218

GTM is an economic model capable of examining global forestry land-use, management, and trade responses to policies. In responding to a policy, the model captures afforestation, forest management, and avoided deforestation behavior. The model estimates harvests in industrial fore...

219

Mathematical models of hysteresis

A new approach to Preisach's hysteresis model, which emphasizes its phenomenological nature and mathematical generality, is briefly described. Then the theorem which gives the necessary and sufficient conditions for the representation of actual hysteresis nonlinearities by Preisach's model is proven. The significance of this theorem is that it establishes the limits of applicability of this model.
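As a rough illustration of the phenomenology the theorem concerns, here is a toy Preisach model built from weighted relay hysterons on a uniform grid (the grid size and equal weights are arbitrary choices, not taken from the paper); each hysteron switches up at its upper threshold and down at its lower one, so the aggregate output depends on the input's history.

```python
class Preisach:
    """Toy Preisach hysteresis model: equally weighted relay hysterons.

    Each hysteron (alpha, beta) with alpha >= beta switches up when the
    input reaches alpha, down when it falls to beta, and otherwise keeps
    its previous state.
    """

    def __init__(self, n=10):
        # Uniform grid of hysterons on the half-plane alpha >= beta in [-1, 1].
        self.hysterons = []
        for i in range(n):
            for j in range(i + 1):
                alpha = -1 + 2 * i / (n - 1)
                beta = -1 + 2 * j / (n - 1)
                self.hysterons.append([alpha, beta, -1])  # start "down"
        self.weight = 1.0 / len(self.hysterons)

    def apply(self, u):
        for h in self.hysterons:
            if u >= h[0]:
                h[2] = 1
            elif u <= h[1]:
                h[2] = -1
        return self.weight * sum(h[2] for h in self.hysterons)

model = Preisach()
up = [model.apply(u) for u in (-1, -0.5, 0, 0.5, 1)]    # ascending branch
down = [model.apply(u) for u in (1, 0.5, 0, -0.5, -1)]  # descending branch
# History dependence: the output at input 0 differs between branches.
print(up[2], down[2])
```

This is only the phenomenological mechanism; the theorem in the paper characterizes exactly which hysteresis nonlinearities such a superposition can represent.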

I. Mayergoyz

1986-01-01

220

NSDL National Science Digital Library

In this activity, candy models are used to demonstrate the features of the Earth, including its internal structure and layers. Students learn why models are essential in Earth science and answer questions about how their candy models do and do not compare with the actual Earth.

Ladue, Nicole

221

This paper describes the progress that has been made by the authors in developing a device and a technique for power system load modelling. The applications developed under the FACTS initiative and the system identification technique using the correlation method with pseudo random sequences (PRS) have been adopted for signal injection and load modelling. Simulation studies for load modelling using

Y. Wang; N. C. Pahalawaththa

1998-01-01

222

National Technical Information Service (NTIS)

A study of circulation in the Alboran Sea begins by using the simplest model capable of simulating major features of the circulation. This is a reduced gravity model in a semi-enclosed rectangular domain. It is essentially a model of the first baroclinic mode...

R. H. Preller

1983-01-01

223

We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors

Timothy F. Cootes; Gareth J. Edwards; Christopher J. Taylor

2001-01-01

224

We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.

Timothy F. Cootes; Gareth J. Edwards; Christopher J. Taylor

1998-01-01

225

One of the environmental and economic models that the U.S. EPA uses to assess climate change policies is the Second Generation Model (SGM). SGM is a 13 region, 24 sector computable general equilibrium (CGE) model of the world that can be used to estimate the domestic and intern...

226

This paper characterizes the behavior of observed asset prices under price limits and proposes the use of two-limit truncated and Tobit regression models to analyze regression models whose dependent variable is subject to price limits. Through a proper arrangement of the sample, these two models, the estimation of which is easy to implement, are applied only to subsets of the

Pin-Huang Chou

1999-01-01

227

Overdispersed Generalized Linear Models.

National Technical Information Service (NTIS)

Generalized linear models have become a standard class of models for data analysts. However in some applications, heterogeneity in samples is too great to be explained by the simple variance function implicit in such models. Utilizing a two-parameter expo...

D. K. Dey; A. E. Gelfand; F. Peng

1994-01-01

228

Generalized Latent Trait Models.

ERIC Educational Resources Information Center

Discusses a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. Presents a unified maximum likelihood method for estimating the parameters of the generalized latent trait model and discusses the scoring of individuals on the latent dimensions.…

Moustaki, Irini; Knott, Martin

2000-01-01

229

NSDL National Science Digital Library

With this tool, students can explore different representations for fractions. They can create a fraction, selecting any numerator or denominator up to 20, and see a model of the fraction as well as its percent and decimal equivalents. For the model, they can choose either a circle, a rectangle, or a set model.
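The tool's numeric conversions are easy to reproduce; the sketch below uses Python's exact `Fraction` type (the function name and bounds check are our own, mirroring the tool's numerator/denominator limit of 20).

```python
from fractions import Fraction

def representations(numerator, denominator):
    """Return a fraction (numerator/denominator up to 20, as in the tool)
    together with its decimal and percent equivalents."""
    if not (0 <= numerator <= 20 and 1 <= denominator <= 20):
        raise ValueError("numerator must be 0..20, denominator 1..20")
    f = Fraction(numerator, denominator)
    return f, float(f), float(f) * 100

frac, dec, pct = representations(3, 4)
print(frac, dec, pct)
```

The circle, rectangle, and set models in the applet are visual renderings of the same value; only the arithmetic is sketched here.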

Illuminations, Nctm

2000-01-01

230

Two Cognitive Modeling Frontiers

NASA Astrophysics Data System (ADS)

This paper reviews three hybrid cognitive architectures (Soar, ACT-R, and CoJACK) and how they can support the inclusion of models of emotions. Creating models in these architectures remains difficult, which is both a research and an engineering problem. Thus, the term cognitive science engineering is introduced for an area that would support making models easier to create, understand, and re-use.

Ritter, Frank E.

231

PRZM3 is a modeling system that links two subordinate models - PRZM and VADOFT to predict pesticide transport and transformation down through the crop root and unsaturated zone. PRZM3 includes modeling capabilities for such phenomena as soil temperature simulation, vo...

232

ERIC Educational Resources Information Center

In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

Bogiages, Christopher A.; Lotter, Christine

2011-01-01

233

Vestibular postural control model

Current models for physiological components and a posture control experiment conducted with three normal subjects form the basis for a model which seeks to describe quantitatively the control of body sway when only vestibular motion cues are used. Emphasis is placed on delineating the relative functional roles of the linear and the angular acceleration sensors and on modeling the functional

Lewis M. Nashner

1972-01-01

234

In this article we put forward a Bayesian approach for finding classification and regression tree (CART) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior distribution that will guide the stochastic search toward more promising CART models. As the search proceeds, such models

Hugh A. Chipman; Edward I. George; Robert E. McCulloch

1998-01-01

235

NSDL National Science Digital Library

This is an activity about scale model building. Learners will use mathematics to determine the scale model size, construct a pattern, and build a paper scale model of the IMAGE (Imager for Magnetopause-to-Aurora Global Exploration) satellite, the first satellite mission to image the Earth's magnetosphere. This is the second activity in the Solar Storms and You: Exploring Satellite Design educator guide.

236

Motivation: Statistical sequence comparison techniques, such as hidden Markov models and generalized profiles, calculate the probability that a sequence was generated by a given model. Log-odds scoring is a means of evaluating this probability by comparing it to a null hypothesis, usually a simpler statistical model intended to represent the universe of sequences as a whole, rather than the group

Christian Barrett; Richard Hughey; Kevin Karplus

1997-01-01

237

ERIC Educational Resources Information Center

Describes types of molecular models (ball-and-stick, framework, and space-filling) and evaluates commercially available kits. Gives instructions for constructing models from polystyrene balls and pipe-cleaners. Models are useful for class demonstrations although not sufficiently accurate for research use. Illustrations show biologically important…

Goodman, Richard E.

1970-01-01

238

Most common approaches to predicting or documenting seedling emergence are imprecise. Mechanistic models that simulate seed dormancy and germination and seedling elongation as functions of measured or estimated environmental variables seem to be the most promising approach to the problem, but they also are the most difficult models to develop. These models will need to integrate soil water potential and

Frank Forcella; Roberto L. Benech Arnold; Rudolfo Sanchez; Claudio M. Ghersa

2000-01-01

239

General Graded Response Model.

ERIC Educational Resources Information Center

This paper describes the graded response model. The graded response model represents a family of mathematical models that deal with ordered polytomous categories, such as: (1) letter grading; (2) an attitude survey with "strongly disagree, disagree, agree, and strongly agree" choices; (3) partial credit given in accord with an individual's degree…

Samejima, Fumiko

240

Computational Models of Neuromodulation

Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational

Jean-marc Fellous; Christiane Linster

1998-01-01

241

The problem of equilibrium states and/or ground states of exactly solvable homogeneous boson models is stated and explicitly proved as a special case of the general variational problem of statistical mechanics in terms of quasifree states. We apply the result to a model of super-radiant Bose-Einstein condensation and to the pairing boson model.

Pule, Joe V. [School of Mathematical Sciences, University College Dublin, Belfield, Dublin 4 (Ireland); Verbeure, Andre F. [Instituut voor Theoretische Fysica, K.U. Leuven, B-3000 Leuven (Belgium); Zagrebnov, Valentin A. [Universite de la Mediterranee Centre de Physique Theorique-UMR 6207, Luminy-Case 907, 13288 Marseille, Cedex 09 (France)

2008-04-15

242

Textile composites: modelling strategies

Textile materials are characterised by the distinct hierarchy of structure, which should be represented by a model of textile geometry and mechanical behaviour. In spite of a profound investigation of textile materials and a number of theoretical models existing in the textile literature for different structures, a model covering all structures typical for composite reinforcements is not available. Hence the

S. V. Lomov; G. Huysmans; Y. Luo; R. S. Parnas; A. Prodromou; I. Verpoest; F. R. Phelan

2001-01-01

243

On the basis of new representations of the projective group, we construct some new dual quark models whose spin and internal symmetry are not multiplicative. One model is a factorized theory of exotic states with broken exchange degeneracy, ninth mesons being optional. The exotic states are suppressed three units below the Pomeranchon. In another model, with spin-orbit coupling and curved

K. Bardakci; M. B. Halpern

1971-01-01

244

Biophysical and spectral modeling

NASA Technical Reports Server (NTRS)

Activities and results of a project to develop strategies for modeling vegetative canopy reflectance are reported. Specific tasks included the inversion of canopy reflectance models to estimate agronomic variables (particularly leaf area index) from in-situ reflectance measurements, and a study of possible uses of ecological models in analyzing temporal profiles of greenness.

Goel, N. S. (principal investigator)

1982-01-01

245

National Technical Information Service (NTIS)

The model of a relativistic spin particle is considered in (2+1)-dimensions. It is shown that there are two bosonic or two fermionic states of spins s = ±α in the quantum spectrum of the model when the parameter of the model α is respectively i...

M. S. Plyushchaj

1990-01-01

246

Psychometric Latent Response Models.

ERIC Educational Resources Information Center

Some psychometric models are presented that belong to the larger class of latent response models (LRMs). Following general discussion of LRMs, a method for obtaining maximum likelihood and some maximum "a posteriori" estimates of the parameters of LRMs is presented and applied to the conjunctive Rasch model. (SLD)

Maris, Eric

1995-01-01

247

This paper presents initial results in model checking multi-threaded Java programs. Java programs are translated into the SAL (Symbolic Analysis Laboratory) intermediate language, which supports dynamic constructs such as object instantiations and thread call stacks. The SAL model checker then exhaustively checks the program description for deadlocks and assertion failures. Basic model checking optimizations that

David Y. W. Park; Jens U. Skakkebaek

2000-01-01

248

Physical Modeling Synthesis Update

Recent research in physical modeling of musical instruments for purposes of sound synthesis is reviewed. Recent references, results, and outstanding problems are highlighted for models of strings, winds, brasses, percussion, and acoustic spaces. Emphasis is placed on digital waveguide models and the musical acoustics research on which they are based.

Julius O. Smith

1996-01-01

249

Continuous Cellular Automata Model

NSDL National Science Digital Library

The Continuous Cellular Automata Model creates a one-dimensional array of rational numbers Xi and applies a rule that determines how each cell evolves from one generation to the next. The numerators and denominators of the numbers Xi are stored as arbitrary precision Big Integer objects so all computations are exact. Because arbitrary precision arithmetic is computationally intensive, we use the Parallel Java library to implement the model. The Continuous Cellular Automata Model was developed by Wolfgang Christian using the Easy Java Simulations (Ejs) modeling tool. It is based on the One-Dimensional Continuous Cellular Automata model in Chapter 17 of the book "Building Parallel Programs" by Alan Kaminsky.
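A hedged Python analogue of such a model, using `fractions.Fraction` in place of Big Integer numerator/denominator pairs so that every generation stays exact; the specific update rule (three-cell neighborhood average plus a constant, taken mod 1) is an assumed example, not necessarily the rule in the Java model.

```python
from fractions import Fraction

def step(cells, c=Fraction(1, 4)):
    """One generation of a continuous cellular automaton: each cell
    becomes the fractional part of its three-cell neighborhood average
    plus an additive constant c (periodic boundary; rule illustrative)."""
    n = len(cells)
    out = []
    for k in range(n):
        avg = (cells[k - 1] + cells[k] + cells[(k + 1) % n]) / 3
        out.append((avg + c) % 1)  # Fraction % int stays an exact Fraction
    return out

cells = [Fraction(0)] * 11
cells[5] = Fraction(1, 2)  # single seed in the middle
for _ in range(20):
    cells = step(cells)
print(cells)
```

Because every value remains a rational number, no rounding ever enters the evolution, which is the point of the arbitrary-precision design; the cost is that the denominators grow, which is why the original model resorts to parallelism.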

Christian, Wolfgang

2012-04-03

250

The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.

1984-03-01

251

The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.

Brown, T. W. [DESY, Hamburg, Theory Group, Notkestrasse, 85, D-22603 Hamburg (Germany)

2011-04-15

252

NSDL National Science Digital Library

The purpose of this web site is to facilitate the development and testing of the Terrain-following Ocean Modeling System (TOMS) and to provide a forum to the ocean community at large. The site provides an explanation of three-dimensional modeling, as well as an overview of the four primary types of ocean modeling methods currently in use and links to labs around the country using these modeling techniques. A collection of links to freely downloadable ocean modeling tools is provided. The site also includes links to data sources, publications, bulletin boards, chat rooms and other relevant sites.

Ocean-Modeling.org

253

NASA Astrophysics Data System (ADS)

An adaptive background model aimed at outdoor vehicle detection is presented in this paper. The model improves on PICA (the pixel intensity classification algorithm): it classifies pixels into K distributions by color similarity, and then the hypothesis that the background pixel color appears in an image sequence with high frequency is used to evaluate all the distributions and determine which represents the current background color. As experiments show, the model presented in this paper is a robust, adaptive and flexible model, which can deal with situations such as camera motion, lighting changes and so on.

Lu, Xiaochun; Xiao, Yijun; Chai, Zhi; Wang, Bangping

2007-11-01

254

NSDL National Science Digital Library

This lesson instructs students on how to read station models, the symbols used on weather maps to show data (temperature, wind speed and direction, barometeric pressure, etc.) for a given reporting station. It includes a diagram of a station model, an explanation of the data conveyed by the numbers and symbols, and a table of definitions for the graphic symbols used with models. There is also a set of interactive station models students can use for practice at interpretation, and an interactive exercise in which students use real-time weather data to interpret models.

255

Keloids and hypertrophic scars are thick, raised dermal scars, caused by derailing of the normal scarring process. Extensive research on such abnormal scarring has been done; however, these being refractory disorders specific to humans, it has been difficult to establish a universal animal model. A wide variety of animal models have been used. These include the athymic mouse, rats, rabbits, and pigs. Although these models have provided valuable insight into abnormal scarring, there is currently still no ideal model. This paper reviews the models that have been developed.

Seo, Bommie F.; Lee, Jun Yong; Jung, Sung-No

2013-01-01

256

NASA Astrophysics Data System (ADS)

Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.

Marion, Giles M.; Kargel, Jeffrey S.

257

In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.

Johnson, Douglas H.; Cook, R. D.

2013-01-01

258

Differential susceptibility epidemic models.

We formulate compartmental differential susceptibility (DS) susceptible-infective-removed (SIR) models by dividing the susceptible population into multiple subgroups according to the susceptibility of individuals in each group. We analyze the impact of disease-induced mortality in the situations where the number of contacts per individual is either constant or proportional to the total population. We derive an explicit formula for the reproductive number of infection for each model by investigating the local stability of the infection-free equilibrium. We further prove that the infection-free equilibrium of each model is globally asymptotically stable by qualitative analysis of the dynamics of the model system and by utilizing an appropriately chosen Liapunov function. We show that if the reproductive number is greater than one, then there exists a unique endemic equilibrium for all of the DS models studied in this paper. We prove that the endemic equilibrium is locally asymptotically stable for the models with no disease-induced mortality and the models with contact numbers proportional to the total population. We also provide sufficient conditions for the stability of the endemic equilibrium for other situations. We briefly discuss applications of the DS models to optimal vaccine strategies; connections between the DS models and predator-prey models with multiple prey populations or host-parasitic interaction models with multiple hosts are also given. PMID:15614550
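A hedged numerical sketch of the simplest two-group case (all rates invented for illustration): forward-Euler integration of a DS-SIR system with standard incidence and no disease-induced mortality, together with the corresponding reproductive number for this formulation.

```python
def simulate(beta=(0.1, 0.4), gamma=0.2, s0=(0.5, 0.49), i0=0.01,
             days=300.0, dt=0.1):
    """Forward-Euler DS-SIR with two susceptibility groups (toy rates).

    ds_k/dt = -beta_k * s_k * i / n,  di/dt = sum_k beta_k * s_k * i / n
    - gamma * i,  dr/dt = gamma * i  (standard incidence, no mortality).
    """
    s, i, r = list(s0), i0, 0.0
    n = sum(s) + i + r  # total population, conserved by these dynamics
    peak = i
    for _ in range(int(days / dt)):
        inc = [b * sk * i / n for b, sk in zip(beta, s)]  # new infections
        di = sum(inc) - gamma * i
        dr = gamma * i
        s = [sk - dt * x for sk, x in zip(s, inc)]
        i += dt * di
        r += dt * dr
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = simulate()
# Reproductive number for this two-group setup: R0 = sum_k beta_k*s_k(0)/gamma.
r0 = (0.1 * 0.5 + 0.4 * 0.49) / 0.2
print(r0, peak)
```

With these illustrative rates R0 exceeds one, so the infective fraction rises above its initial value before declining, consistent with the endemic-threshold result stated in the abstract.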

Hyman, James M; Li, Jia

2005-06-01

259

Animal models of atherosclerosis

In this mini-review several commonly used animal models of atherosclerosis have been discussed. Among them, emphasis has been made on mice, rabbits, pigs and non-human primates. Although these animal models have played a significant role in our understanding of induction of atherosclerotic lesions, we still lack a reliable animal model for regression of the disease. Researchers have reported several genetically modified and transgenic animal models that replicate human atherosclerosis; however, each of the current animal models has some limitations. Among these animal models, the apolipoprotein (apo) E-knockout (KO) mice have been used extensively because they develop spontaneous atherosclerosis. Furthermore, atherosclerotic lesions developed in this model, depending on experimental design, may resemble humans’ stable and unstable atherosclerotic lesions. This mouse model of hypercholesterolemia and atherosclerosis has been also used to investigate the impact of oxidative stress and inflammation on atherogenesis. Low density lipoprotein (LDL)-r-KO mice are a model of human familial hypercholesterolemia. However, unlike apo E-KO mice, the LDL-r-KO mice do not develop spontaneous atherosclerosis. Both apo E-KO and LDL-r-KO mice have been employed to generate other relevant mouse models of cardiovascular disease through breeding strategies. In addition to mice, rabbits have been used extensively particularly to understand the mechanisms of cholesterol-induced atherosclerosis. The present review paper details the characteristics of animal models that are used in atherosclerosis research.

Kapourchali, Fatemeh Ramezani; Surendiran, Gangadaran; Chen, Li; Uitz, Elisabeth; Bahadori, Babak; Moghadasian, Mohammed H

2014-01-01

260

NSDL National Science Digital Library

The Graphs and Tracks Model allows instructors to create custom models of a ball rolling on a track with a variable shape. This EJS model was inspired by the Graphs and Tracks program by David Trowbridge. Instructors set the heights of the track segments and the model displays the motion of the ball. Optional displays, including position and velocity graphs, energy graphs, and data tables, can be added depending on the learning goals for the activity. Documents can also be added to the model to provide student instructions or activities. The customized simulation is then saved as a new jar file that can be redistributed. The Graphs and Tracks Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file will run the program if Java is installed.

Christian, Wolfgang; Belloni, Mario

2012-05-30

261

NSDL National Science Digital Library

The EJS Oscillator Chain model shows a one-dimensional linear array of coupled harmonic oscillators with fixed ends. This model can be used to study the propagation of waves in a continuous medium and the vibrational modes of a crystalline lattice. The Ejs model shown here contains 31 coupled oscillators equally spaced within the interval [0, 2 pi] with fixed ends. Ejs Oscillator Chain model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_mech_osc_OscillatorChain.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for classical mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.
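The vibrational modes mentioned above can be checked against the textbook dispersion relation for a fixed-end chain. The sketch below assumes unit masses and spring constants, which may differ from the simulation's settings, and compares numerical eigenfrequencies of the chain's dynamical matrix with the analytic formula:

```python
import numpy as np

N = 31                                    # oscillators, as in the EJS model
# Dynamical matrix of a fixed-end chain with unit mass and spring constant.
D = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(D)))

# Analytic normal-mode frequencies: omega_n = 2 sin(n*pi / (2(N+1)))
n = np.arange(1, N + 1)
omega_analytic = 2 * np.sin(n * np.pi / (2 * (N + 1)))
# The two agree to floating-point precision.
```

The eigenvalues of the tridiagonal (2, -1) matrix are 4 sin^2(n pi / (2(N+1))), which is exactly the discrete dispersion relation, so this is a useful sanity check before studying wave propagation in the chain.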

Christian, Wolfgang

2008-11-06

262

NSDL National Science Digital Library

The Parallel Mandelbrot Set Model is a parallelization of the sequential MandelbrotSet model, which does all the computations on a single processor core. This parallelization is able to use a computer with more than one core (or processor) to carry out the same computation, thus speeding up the process. The parallelization is done using the model elements in the Parallel Java group. These model elements allow easy use of the Parallel Java library created by Alan Kaminsky. In particular, the parallelization used for this model is based on code in Chapters 11 and 12 of Kaminsky's book Building Parallel Java. The Parallel Mandelbrot Set Model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_chaos_ParallelMandelbrotSet.jar file to run the program if Java is installed.
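A comparable row-parallel decomposition can be sketched with Python's standard multiprocessing module. This illustrates the same divide-the-rows idea, not the Parallel Java API itself; the image bounds and sizes are arbitrary choices.

```python
from multiprocessing import Pool

def mandel_row(args):
    """Escape-time counts for one image row of the Mandelbrot set."""
    j, width, height, max_iter = args
    y = -1.5 + 3.0 * j / (height - 1)
    row = []
    for i in range(width):
        c = complex(-2.0 + 3.0 * i / (width - 1), y)
        z, k = 0j, 0
        while k < max_iter and abs(z) <= 2.0:
            z, k = z * z + c, k + 1
        row.append(k)
    return j, row

def mandelbrot(width=64, height=64, max_iter=50, workers=2):
    """Distribute rows over worker processes and reassemble them in order."""
    tasks = [(j, width, height, max_iter) for j in range(height)]
    with Pool(workers) as pool:
        rows = sorted(pool.map(mandel_row, tasks))
    return [r for _, r in rows]
```

Because each row is independent, the speedup is close to the worker count for large images; the same embarrassingly parallel structure is what the Parallel Java version exploits.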

Franciscouembre

2011-11-24

263

Many different types of transportation models are used to model coal transportation by rail. To obtain realistic results, it is usually necessary to consider other modes in addition to rail and other commodities in addition to coal. For example, to know the potential bottlenecks on the rail system it is necessary to predict the total level of freight movement on the rail system. This requires modeling the movements of other commodities in addition to coal. To predict the levels of flows of both coal and non-coal commodities on the rail system, it is necessary to predict the share of total flows carried by rail. This requires accurate modeling of competing modes. To develop accurate rate models it is also necessary to have information on competing modes. This paper presents a collection of transportation models used to model the various aspects of coal transportation by rail and shows how they interact.

Tobin, R.L.

1982-01-01

264

In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

2008-09-01

265

NSDL National Science Digital Library

The Tracker video analysis and modeling program enables students to create simple particle model simulations based on Newton's laws and to compare their behavior directly with that of real-world objects captured on video. Tracker's "model builder" provides a gentle introduction to dynamic modeling by making it easy to define and modify force expressions, parameter values and initial conditions while hiding the numerical algorithm details. Because the model simulations synchronize with and draw themselves right on the video, students can test their models experimentally by direct visual inspection, a process that is both intuitive and discerning. This leads them to move from a paradigm of "problem solving" to one of "model building and testing" that reflects more closely the activities of professional physicists. Tracker is part of the Open Source Physics project. Tracker is available at or from the comPADRE Open Source Physics collection at . Partial funding was provided by NSF grant DUE-0442581.

Brown, Douglas

2010-08-11

266

Toward Scientific Numerical Modeling

NASA Technical Reports Server (NTRS)

Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

Kleb, Bil

2007-01-01

267

The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). 
Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To assess the impacts of moisture on the ventilation efficiency.
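The defining relation between the two fractions is simple bookkeeping, sketched below; the numerical values are made up for illustration and are not from the report.

```python
def wall_heat_fractions(heat_removal):
    """Wall heat fraction as defined in the report: the share of decay
    heat conducted into the surrounding rock is one minus the share
    removed by the ventilation air. `heat_removal` is a sequence of
    heat-removal fractions over time (illustrative values only)."""
    for e in heat_removal:
        if not 0.0 <= e <= 1.0:
            raise ValueError("heat-removal fraction must lie in [0, 1]")
    return [1.0 - e for e in heat_removal]
```

A downstream thermal model would consume the returned series as its boundary heat input, exactly as the report describes for the Multiscale Thermohydrologic Model.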

V. Chipman; J. Case

2002-12-20

268

Model checking of healthcare domain models.

This paper shows the application of a type of formal software verification technique known as lightweight model checking to a domain model in healthcare informatics in general and public health surveillance systems in particular. One of the most complex use cases of such a system is checked using assertions to verify one important system property. This use case is one of the major justifications for the complexity of the domain model. The Alloy Analyzer verification tool is used for this purpose. Such verification work is very effective in either uncovering design flaws or in providing guarantees on certain desirable system properties in the earlier phases of the development lifecycle of any critical project. PMID:19640605

Baksi, Dibyendu

2009-12-01

269

The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms have been implemented in LAME. The structure and testing of LAME is described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from the simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included for awareness purposes only. 
The analyst can take confidence in the fact that the model has been properly implemented and that the methods necessary for achieving accurate and efficient solutions have been incorporated. The most important method is the getStress function where the actual material model evaluation takes place. Obviously, all material models incorporate this function. The initialize function is included in most material models. The initialize function is called once at the beginning of an analysis and its primary purpose is to initialize the material state variables associated with the model. Many times, there is some information which can be set once per load step. For instance, we may have temperature dependent material properties in an analysis where temperature is prescribed. Instead of setting those parameters at each iteration in a time step, it is much more efficient to set them once per time step at the beginning of the step. These types of load step initializations are performed in the loadStepInit method. The final function used by many models is the pcElasticModuli method which changes the moduli that are to be used by the elastic preconditioner in Adagio. The moduli for the elastic preconditioner are set during the initialization of Adagio. Sometimes, better convergence can be achieved by changing these moduli for the elastic preconditioner. For instance, it typically helps to modify the preconditioner when the material model has temperature dependent moduli. For many material models, it is not necessary to change the values of the moduli that are set initially in the code. Hence, those models do not have pcElasticModuli functions. All four of these methods receive information from the matParams structure as described by Scherzinger and Hammerand.

Hammerand, Daniel Carl; Scherzinger, William Mark

2007-09-01

270

Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling.

We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates, others show wide-ranging generalism. 
This demonstrates that phyloclimatic modeling can be repeated for other plant groups and is fundamental to understanding evolutionary responses to climate change. PMID:17060200

Yesson, C; Culham, A

2006-10-01

271

NASA Technical Reports Server (NTRS)

The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from components - atmosphere, ocean, ice, land, chemistry, solid earth, etc. models and merged together through a coupling program which is responsible for the exchange of data from the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments to evaluate. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese ES theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.

Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol

2003-01-01

272

A unique end-to-end LIDAR sensor model has been developed supporting the concept development stage of the CALIOPE UV DIAL and UV laser-induced-fluorescence (LIF) efforts. The model focuses on preserving the temporal and spectral nature of signals as they pass through the atmosphere, are collected by the optics, detected by the sensor, and processed by the sensor electronics and algorithms. This is done by developing accurate component sub-models with realistic inputs and outputs, as well as internal noise sources and operating parameters. These sub-models are then configured using data-flow diagrams to operate together to reflect the performance of the entire DIAL system. This modeling philosophy allows the developer to have a realistic indication of the nature of signals throughout the system and to design components and processing in a realistic environment. Current component models include atmospheric absorption and scattering losses, plume absorption and scattering losses, background, telescope and optical filter models, PMT (photomultiplier tube) with realistic noise sources, amplifier operation and noise, A/D converter operation, noise and distortion, pulse averaging, and DIAL computation. Preliminary results of the model will be presented, indicating the expected model operation for the October field test at the NTS spill test facility. Indications will be given concerning near-term upgrades to the model.
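The "DIAL computation" step at the end of that processing chain can be illustrated with the generic two-wavelength retrieval. This is the textbook form, not necessarily the exact CALIOPE algorithm; the symbols are standard DIAL notation.

```python
import math

def dial_number_density(p_on, p_off, delta_sigma, path_length):
    """Path-averaged absorber number density from on/off-line DIAL returns:
        N = ln(P_off / P_on) / (2 * delta_sigma * L)
    where delta_sigma is the differential absorption cross-section (m^2)
    and L is the one-way path length (m); the factor 2 accounts for the
    round trip of the laser pulse."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma * path_length)
```

Running the retrieval on a synthetic pair of returns generated from a known density recovers that density, which is the basic consistency check an end-to-end sensor model enables.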

Gentry, S.; Taylor, J.; Stephenson, D.

1994-06-01

274

Turbulence modeling and experiments

NASA Technical Reports Server (NTRS)

The best way of verifying turbulence models is to do a direct comparison between the various terms and their models. The success of this approach depends upon the availability of the data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using the second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This, therefore, includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. Which of the three models performs better is examined, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward the second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for the turbulence modelers. 
The purpose of this part of the work was to provide such balances of the Reynolds stress and heat flux equations for the buoyant plume.

Shabbir, Aamir

1992-01-01

275

NSDL National Science Digital Library

This applet tests the sensitivity of a barotropic model to time step, grid spacing, and initial conditions. The site explains the CFL (Courant-Friedrichs-Lewy) criterion (that the speed of the fastest winds in the model must be less than or equal to the grid spacing divided by the time step) and how a finite-difference weather prediction model blows up if this criterion is not met. The user of this applet will learn what the model looks like when it blows up, that a modeler cannot arbitrarily choose a horizontal grid spacing without also taking into account the time step of the model, and that if fine horizontal resolution is desired to see small-scale weather, there must be fine time resolution, too.
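The criterion and its consequences can be sketched numerically. A one-dimensional upwind advection scheme stands in here for the barotropic model; the grid sizes, wind speeds, and step counts are arbitrary illustrative choices.

```python
import numpy as np

def cfl_ok(wind_speed, dx, dt):
    """The applet's criterion: fastest wind <= dx/dt, i.e. u*dt/dx <= 1."""
    return wind_speed * dt / dx <= 1.0

def advect(courant, nx=100, steps=200):
    """March a sine wave with the upwind scheme at Courant number u*dt/dx
    and return the final maximum amplitude. The amplitude stays bounded
    when the CFL criterion holds and blows up when it is violated."""
    u = np.sin(2 * np.pi * np.arange(nx) / nx)
    for _ in range(steps):
        u = u - courant * (u - np.roll(u, 1))
    return float(np.abs(u).max())
```

Running `advect(0.5)` keeps the wave bounded, while `advect(1.5)` amplifies grid-scale noise by roughly a factor of two per step, which is exactly the blow-up behavior the applet demonstrates.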

Ackerman, Steve; Whittaker, Tom

276

Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization.
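The standard bat algorithm that CBA builds on can be sketched as follows. This is Yang's basic BA, not the cloud-model or Lévy-flight variant proposed in the paper, and all parameter values are illustrative.

```python
import math
import random

def bat_algorithm(f, dim=2, n_bats=20, iters=500, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with the basic bat algorithm:
    frequency-tuned velocity updates, a local random walk around the
    current best, and loudness/pulse-rate schedules."""
    rng = random.Random(seed)
    clip = lambda t: min(hi, max(lo, t))
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats              # loudness A_i, shrinks on acceptance
    rate = [0.5] * n_bats              # pulse emission rate r_i
    best = min(x, key=f)[:]
    for t in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] = [v[i][d] + (x[i][d] - best[d]) * freq for d in range(dim)]
            cand = [clip(x[i][d] + v[i][d]) for d in range(dim)]
            if rng.random() > rate[i]:
                # local random walk around the best solution so far
                cand = [clip(best[d] + 0.1 * loud[i] * rng.gauss(0.0, 1.0))
                        for d in range(dim)]
            if f(cand) <= f(x[i]) and rng.random() < loud[i]:
                x[i] = cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * (t + 1)))
            if f(x[i]) < f(best):
                best = x[i][:]
    return best, f(best)
```

The cloud-model variant replaces the local random walk with candidates drawn from a normal cloud around the best bat, which is what the paper's "bats approach their prey" transformation formalizes.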

Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

2014-01-01

277

NASA Technical Reports Server (NTRS)

The outside users payload model, which is a continuation of earlier documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.

1985-01-01

278

NSDL National Science Digital Library

The EJS Falling Loop Model shows a conducting loop falling out of a uniform magnetic field. Users can change the size and orientation of the loop. If Ejs is installed, right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item allows for editing of the model. The Falling Loop model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_em_FallingLoop.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Cox, Anne

2009-09-24

279

Animal models of disease states are valuable tools for developing new treatments and investigating underlying mechanisms. They should mimic the symptoms and pathology of the disease and importantly be predictive of effective treatments. Fibromyalgia is characterized by chronic widespread pain with associated co-morbid symptoms that include fatigue, depression, anxiety and sleep dysfunction. In this review, we present different animal models that mimic the signs and symptoms of fibromyalgia. These models are induced by a wide variety of methods that include repeated muscle insults, depletion of biogenic amines, and stress. All potential models produce widespread and long-lasting hyperalgesia without overt peripheral tissue damage and thus mimic the clinical presentation of fibromyalgia. We describe the methods for induction of the model, pathophysiological mechanisms for each model, and treatment profiles.

2013-01-01

280

NSDL National Science Digital Library

The Morris-Lecar Model is an abstraction of the Hodgkin-Huxley Model that has two state variables: the voltage within the neuron and a potassium gating variable. This model gives three presets for parameters and allows the user to open a new window to see the phase space. By viewing the phase space and adjusting the parameters, users can gain a complete understanding of the dynamics of this neuron abstraction. The Morris-Lecar Model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file will run the program if Java is installed. You can modify this simulation if you have EJS installed by right-clicking within the map and selecting "Open Ejs Model" from the pop-up menu item.
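A minimal integration of the Morris-Lecar equations shows the two state variables evolving together. The parameter set below is the widely used Rinzel-Ermentrout one, which is an assumption here and may differ from the simulation's three presets.

```python
import math

def morris_lecar(I=0.0, v=-60.0, w=0.01, dt=0.05, steps=20000):
    """Euler integration of the Morris-Lecar neuron: membrane voltage v (mV)
    and potassium gating variable w, driven by applied current I (uA/cm^2).
    Rinzel-Ermentrout parameters (an assumption, not the applet's presets)."""
    C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
    VL, VCa, VK = -60.0, 120.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    for _ in range(steps):
        minf = 0.5 * (1.0 + math.tanh((v - V1) / V2))   # fast Ca activation
        winf = 0.5 * (1.0 + math.tanh((v - V3) / V4))   # K gating steady state
        tauw = 1.0 / math.cosh((v - V3) / (2.0 * V4))   # K gating time scale
        dv = (I - gL * (v - VL) - gCa * minf * (v - VCa)
              - gK * w * (v - VK)) / C
        dw = phi * (winf - w) / tauw
        v, w = v + dt * dv, w + dt * dw
    return v, w
```

With no applied current the voltage settles near the leak reversal potential, which is the quiescent state visible in the phase-space window; raising I pushes the system through the bifurcation to repetitive spiking.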

Thomson, Colin F.

2012-09-18

281

Universality in sandpile models

NASA Astrophysics Data System (ADS)

A classification of sandpile models into universality classes is presented. On the basis of extensive numerical simulations, in which we measure an extended set of exponents, the Manna two-state model [S. S. Manna,
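The Manna two-state model cited above can be sketched in a few lines. The version below is one-dimensional with open boundaries; the lattice size and grain count are arbitrary illustrative choices.

```python
import random

def manna(L=20, grains=2000, seed=0):
    """Abelian Manna sandpile: drop grains at random sites; any site
    holding >= 2 grains topples, sending each of its 2 grains to an
    independently chosen random neighbor; grains leaving either end of
    the chain are lost. Returns final heights and avalanche sizes."""
    rng = random.Random(seed)
    z = [0] * L
    sizes = []
    for _ in range(grains):
        z[rng.randrange(L)] += 1
        size = 0
        while True:
            unstable = [i for i in range(L) if z[i] >= 2]
            if not unstable:
                break
            for i in unstable:
                z[i] -= 2
                size += 1
                for _ in range(2):          # two grains, random directions
                    j = i + rng.choice((-1, 1))
                    if 0 <= j < L:
                        z[j] += 1
        sizes.append(size)
    return z, sizes
```

Universality studies like the one above measure the distribution of `sizes` (and related avalanche durations and areas) on much larger lattices to extract the critical exponents that define the universality class.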

Ben-Hur, Asa; Biham, Ofer

1996-02-01

282

NASA Astrophysics Data System (ADS)

An extension of a basic model, initially developed for sand, is presented with the aim of applying it to clays. The formal principles of elastoplasticity, as well as the origin of the plasticity surface forms that are commonly used for soils, are reviewed. This basic model was applied to study the behavior of clay in cyclic torsion experiments. The results show that it is not possible to reproduce the appearance of dilatancy in undrained cyclic experiments if the clay is normally consolidated. An analysis of how to introduce a Cam-Clay type closed surface into the basic model is given. The consolidation mechanisms as well as the deviatoric mechanisms of the proposed model extension are described. The identification process of the parameters for the proposed model is summarized. An analytical process of identification from drained and undrained experiments is given. An analysis of the simulation results is included.

Dubujet, Philippe

1992-07-01

283

Extended frequency turbofan model

NASA Technical Reports Server (NTRS)

The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

Mason, J. R.; Park, J. W.; Jaekel, R. F.

1980-01-01

284

NSDL National Science Digital Library

Students create a physical model illustrating soil water balance using drinking glasses to represent the soil column, and explain how the model can be used to interpret data and form predictions. Using data from the GLOBE Data Server, they calculate the potential evapotranspiration, average monthly temperatures and precipitation for their model. This is a learning activity associated with the GLOBE hydrology investigations and is supported by the Hydrology chapter of the GLOBE Teacher's Guide.
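The potential evapotranspiration calculation mentioned above can be sketched with the classic Thornthwaite monthly-temperature method. This is an assumption: the GLOBE Teacher's Guide may use a variant, and the day-length correction factor is omitted here for brevity.

```python
def thornthwaite_pet(monthly_temps_c):
    """Thornthwaite potential evapotranspiration (mm/month) from twelve
    mean monthly air temperatures in deg C, without the day-length
    correction; months at or below 0 deg C contribute nothing."""
    temps = [max(t, 0.0) for t in monthly_temps_c]
    heat_index = sum((t / 5.0) ** 1.514 for t in temps if t > 0.0)
    if heat_index == 0.0:
        return [0.0] * len(temps)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0.0 else 0.0
            for t in temps]
```

Comparing monthly PET against measured precipitation, as the glasses in the physical model do, shows whether a site's soil column is gaining or losing stored water in each month.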

285

NSDL National Science Digital Library

The EJS Roller Coaster model explores the relationship between kinetic, potential, and total energy as a cart travels along a roller coaster. Users can create their own roller coaster curve and observe the resulting motion. The Roller Coaster model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file will run the program if Java is installed.
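The energy bookkeeping behind the model is just conservation of mechanical energy. A frictionless sketch, per unit mass and with an assumed g, follows:

```python
import math

def track_speeds(heights, v0=0.0, g=9.8):
    """Cart speed at each sampled track height from conservation of
    mechanical energy on a frictionless track (per unit mass):
    0.5*v^2 + g*h = 0.5*v0^2 + g*heights[0]."""
    e_total = 0.5 * v0 * v0 + g * heights[0]
    return [math.sqrt(max(0.0, 2.0 * (e_total - g * h))) for h in heights]
```

For a cart released from rest at 10 m, the speed at the bottom is sqrt(2 * 9.8 * 10) = 14 m/s; kinetic energy grows exactly as potential energy falls, which is the trade-off the model's energy graphs display.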

Gallis, Michael R.

2008-10-27

286

NSDL National Science Digital Library

The EJS Ceiling Bounce Model shows a ball launched by a spring-gun in a building with a very high ceiling and a graph of the ball's position or velocity as a function of time. Students are asked to set the ball's initial velocity so that it barely touches the ceiling. This simple model is designed to teach both physics and EJS modeling.

Christian, Wolfgang

2008-12-16

287

The Surface-To-Air Missile (SAM) Electro-Magnetic-Pulse (EMP) (SEMP) model simulates the illumination of an entire SAM brigade with an EMP weapon. It computes probability distributions of SAM brigade performance levels after an EMP attack has occurred. Brigade performance is determined by the combination of components that survive the EMP. Accordingly, the SEMP model is separated into the component failure model and

Thatcher

1984-01-01

288

The Generic Modeling Environment

The Generic Modeling Environment (GME) is a configurable toolset that supports the easy creation of domain-specific modeling and program synthesis environments. The primarily graphical, domain-specific models can represent the application and its environment, including hardware resources, and their relationship. The models are then used to automatically synthesize the application and/or generate inputs to

Akos Ledeczi; Miklos Maroti; Arpad Bakay; Gabor Karsai

2001-01-01

289

NASA Technical Reports Server (NTRS)

The goals of the Integrated Medical Model (IMM) are to develop an integrated, quantified, evidence-based decision support tool useful to crew health and mission planners and to help align science, technology, and operational activities intended to optimize crew health, safety, and mission success. Presentation slides address scope and approach, beneficiaries of IMM capabilities, history, risk components, conceptual models, development steps, and the evidence base. Space adaptation syndrome is used to demonstrate the model's capabilities.

Kerstman, Eric; Minard, Charles; Saile, Lynn; Freiere deCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David

2010-01-01

290

Making Mendel's Model Manageable

NSDL National Science Digital Library

Genetics is often a fascinating but difficult subject for middle level students. This engaging activity presents an approach that helps students understand how genotypes can translate into phenotypes using Gummi Bears and Gummi Dolphins to solve problems using Mendel's model, and then revising the model as necessary. Developing a model gives students a sense of how science works and how data translate into scientific ideas.

Mesmer, Karen

2006-01-01

291

NSDL National Science Digital Library

The Thin Film Interference model investigates reflection and transmission of light through a thin film. The user can change the thickness and index of refraction of the thin film as well as the incident light wavelength. The Thin Film Interference Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_ThinFilm.jar file will run the program if Java is installed.

Duffy, Andrew

2010-04-25

292

The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) model at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository. The EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all the input parameters to be derived from a statistical distribution. Recently, a one-dimensional numerical model was also incorporated into AREST to allow for more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to account for the reactive coupling of the processes involved in the release process. Such coupling would include: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, to name a few. Several of these coupled processes are already incorporated in the current version of AREST.
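Decay-chain transport of the kind the numerical AREST module handles rests on the Bateman equations. A minimal illustrative sketch for the simplest two-member chain (parent -> daughter), not the AREST code itself:

```python
import math

def bateman_two_member(n1_0, lam1, lam2, t):
    """Inventories of a two-member decay chain parent -> daughter -> ...

    n1_0: initial parent inventory (daughter starts at zero),
    lam1, lam2: decay constants (per unit time), assumed unequal.
    Returns (N1, N2) at time t from the classic Bateman solution.
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2
```

Longer chains generalize this to a sum of exponentials with one term per ancestor, which is why analytical source-term models stay computationally cheap enough for statistical sampling of input parameters.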

Engel, D.W.; McGrail, B.P.

1993-11-01

293

Generating Gowdy cosmological models

NASA Astrophysics Data System (ADS)

Using the analogy with stationary axisymmetric solutions, we present a method to generate new analytic cosmological solutions of Einstein's equation belonging to the class of T3 Gowdy cosmological models. We show that the solutions can be generated from their data at the initial singularity and present the formal general solution for arbitrary initial data. We exemplify the method by constructing the Kantowski-Sachs cosmological model and a generalization of it that corresponds to an unpolarized T3 Gowdy model.

Sánchez, Alberto; Macías, Alfredo; Quevedo, Hernando

2004-05-01

294

NASA Technical Reports Server (NTRS)

Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.

Levison, W. H.; Baron, S.

1984-01-01

295

Evidence-based models and frameworks have been introduced to support diabetes self-management education and support. This article presents various frameworks and models and describes their use in support of diabetes education at the patient–educator, the practice environment, and the systems/policy/environmental level. The text and tables present various models and specific recommendations and examples for educators to use at every level. Crosscutting

Linda M. Siminerio

296

Solid model design simplification

This paper documents an investigation of approaches to improving the quality of Pro/Engineer-created solid model data for use by downstream applications. The investigation identified a number of sources of problems caused by deficiencies in Pro/Engineer's geometric engine, and developed prototype software capable of detecting many of these problems and guiding users towards simplified, useable models. The prototype software was tested using Sandia production solid models, and provided significant leverage in attacking the simplification problem.

Ames, A.L.; Rivera, J.J.; Webb, A.J.; Hensinger, D.M.

1997-12-01

297

NSDL National Science Digital Library

The Spherical Mirror Model demonstrates the focusing of light using a spherical mirror. The user can change the size and position of the object, the focal length of the mirror and the rays shown in the diagram. The Spherical Mirror Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_Spherical_Mirror.jar file will run the program if Java is installed.

Duffy, Andrew

2010-04-25

298

Modeling Carbon Dioxide Levels

NSDL National Science Digital Library

In this activity students will explore levels of carbon dioxide (CO2) in the atmosphere over time. There is concern that levels of CO2 are rising, and finding a good mathematical model for CO2 levels is an important part of determining whether this is attributable to human technology. Students draw a scatter plot, choose two points to create a linear model for the data, then use the model to make predictions.
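The two-point linear model the students build can be sketched directly in code. The sample readings below are illustrative values near the historical record, not data supplied by the activity:

```python
def two_point_linear_model(p1, p2):
    """Build a linear model y = m*x + b from two chosen data points.

    p1, p2: (year, co2_ppm) pairs picked from the scatter plot.
    Returns a function that predicts a CO2 level for any year.
    """
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope: ppm per year
    b = y1 - m * x1             # intercept
    return lambda x: m * x + b

# Example with two illustrative readings: slope = 1.3 ppm/year
predict = two_point_linear_model((1960, 317.0), (2000, 369.0))
```

Extrapolating the line, `predict(2020)` gives 395 ppm, and comparing such predictions against later measurements is exactly how students judge whether a linear model fits.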

2009-01-01

299

Atmospheric prediction model survey

NASA Technical Reports Server (NTRS)

As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

Wellck, R. E.

1976-01-01

300

Global Atmospheric Aerosol Modeling

NASA Technical Reports Server (NTRS)

Global aerosol models are used to study the distribution and properties of atmospheric aerosol particles as well as their effects on clouds, atmospheric chemistry, radiation, and climate. The present article provides an overview of the basic concepts of global atmospheric aerosol modeling and shows some examples from a global aerosol simulation. Particular emphasis is placed on the simulation of aerosol particles and their effects within global climate models.

Hendricks, Johannes; Aquila, Valentina; Righi, Mattia

2012-01-01

301

NSDL National Science Digital Library

In this activity, learners simulate the behavior of the atmosphere. Learners working in groups of four will represent "cells" of the model (Earth, lower atmosphere, upper atmosphere, and space) and exchange energy with each other. Learners will observe how temperature fluctuates in the model. Use this activity to introduce learners to the inner-workings of the atmosphere as well as how scientists use models to understand abstract phenomena.

University, Colorado S.

2009-01-01

302

NSDL National Science Digital Library

In this activity, learners build a miniature wind tunnel to measure force. Learners construct the model out of Lexan plastic, a fan, and a precise digital scale. When wind pushes against a model car, a beam (hacked out of the digital scale) measures the force, which is very close to the actual drag caused by the car. Learners can use this tool to help prepare for a Pinewood Derby or model car race, or to learn about wind forces and turbulence.

Desrochers, Douglas

2011-01-01

303

Deterministic implied volatility models

In this paper, we characterize two deterministic implied volatility models, defined by assuming that either the per-delta or the per-strike implied volatility surface has a deterministic evolution. Practitioners have recently proposed these two models to describe two regimes of implied volatility (see Derman (1999 Risk 4 55–9)). In an arbitrage-free sticky-delta model, we show that the underlying asset price is the
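The two regimes can be illustrated with a toy smile function. Everything below (the quadratic smile shape and all numbers) is illustrative, not taken from the paper:

```python
def smile(moneyness):
    """Toy implied-volatility smile as a function of K/S (illustrative)."""
    return 0.20 + 0.5 * (moneyness - 1.0) ** 2

def implied_vol_sticky_strike(strike, spot_ref):
    # Sticky-strike regime: the vol quoted at a fixed strike K does not
    # move when spot moves; the smile stays pinned to the reference spot.
    return smile(strike / spot_ref)

def implied_vol_sticky_delta(strike, spot_now):
    # Sticky-delta regime: vol depends on moneyness K/S, so the whole
    # smile floats with the current spot level.
    return smile(strike / spot_now)
```

When spot rallies from 100 to 110, the sticky-strike vol at K = 100 is unchanged while the sticky-delta vol at the same strike rises, because that strike is now below the money.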

P. Balland

2002-01-01

304

NSDL National Science Digital Library

Created by David Hestenes of Arizona State University's modeling instruction and software development research, this site presents an approach to reform of curriculum design and teaching methodology guided by a Modeling Theory of Physics Instruction. In addition, the site provides modeling instruction for biology, chemistry, and physical science. External links are provided under the heading "Opportunities for Professional Growth." These allow teachers the opportunity to explore workshops and other programs that can help enhance their teaching and curriculum.

Hestenes, David; Jackson, Jane

2009-04-20

305

HOMER® Micropower Optimization Model

NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.

Lilienthal, P.

2005-01-01

306

Radiation Environment Modeling for Spacecraft Design: New Model Developments

NASA Technical Reports Server (NTRS)

A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

2006-01-01

307

Risk Modelling and Reasoning in Goal Models

In software engineering, risks are usually considered and analysed during, or even after, the design of the system. This approach can lead to the problem of accommodating necessary countermeasures in an existing design, and possibly to the need to reconsider the initial requirements of the system. In this paper, we propose a goal-oriented approach for modelling and reasoning about risks

Yudistira Asnar; Paolo Giorgini; John Mylopoulos

308

Analytic Trajectory Visualizer Model

NSDL National Science Digital Library

The Analytic Animator allows instructors to create two-dimensional single-particle kinematics models for teaching. Instructors set two functions, x(t) and y(t), and the model displays the position-space particle motion as well as position, velocity, and acceleration graphs and tables. The customized simulation is then saved with associated curricular material as a new jar file that can be redistributed. The Analytic Trajectory Animator Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_newton_TabletopProjectile.jar file will run the program if Java is installed.

Christian, Wolfgang; Belloni, Mario

2012-05-15

309

NSDL National Science Digital Library

The EJS Oscillator Chain model shows a one-dimensional linear array of coupled harmonic oscillators with fixed ends. This model can be used to study the propagation of waves in a continuous medium and the vibrational modes of a crystalline lattice. The EJS model shown here contains 31 coupled oscillators equally spaced within the interval [0, 2 pi] with fixed ends. The Oscillator Chain JS Model was developed using the Easy Java Simulations (EJS) version 5. It is distributed as a ready-to-run html page and requires only a browser with JavaScript support.
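For a uniform chain with fixed ends the vibrational mode frequencies have a standard closed form. A short sketch of that textbook result (not extracted from the EJS source; parameter names are illustrative):

```python
import math

def mode_frequencies(n, k_spring=1.0, mass=1.0):
    """Normal-mode angular frequencies of n coupled oscillators, fixed ends.

    For a uniform chain with fixed boundaries the modes are
    omega_k = 2*sqrt(k_spring/mass)*sin(k*pi/(2*(n+1))), k = 1..n.
    """
    w0 = math.sqrt(k_spring / mass)
    return [2.0 * w0 * math.sin(k * math.pi / (2.0 * (n + 1)))
            for k in range(1, n + 1)]

modes = mode_frequencies(31)  # the 31-oscillator chain in the model
```

The spectrum rises monotonically and saturates below 2*omega_0, the cutoff frequency above which waves no longer propagate along the chain.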

Christian, Wolfgang

2013-08-25

310

NASA Astrophysics Data System (ADS)

Within the set of generalized Skyrme models, we identify a submodel which has both infinitely many symmetries and a Bogomolny bound which is saturated by infinitely many exact soliton solutions. Concretely, the submodel consists of the square of the baryon current and a potential term only. Further, already on the classical level, this BPS Skyrme model reproduces some features of the liquid drop model of nuclei. Here, we review the properties of the model and we discuss the semiclassical quantization of the simplest Skyrmion (the nucleon).

Adam, C.; Sanchez-Guillen, J.; Wereszczynski, A.

2011-03-01

311

312

NSDL National Science Digital Library

Modeling a Changing World, written by mathematics professor Tim Chartier and his student Nick Dovidio, presents curricular material in an OSP Launcher package to motivate the need for numerically solving ordinary differential equations. The package discusses such applications as a mass-spring system and its connection to computer simulation for movies. An interactive model that simulates a two-body gravitational model of the Moon and Earth allows for exploring the topic of numerical error. Other models explore topics that include slope fields, numerical integration, and numerical solvers for ordinary differential equations.

Chartier, Tim

2008-09-19

313

NSDL National Science Digital Library

The Ferris Wheel Model explores the amusement park ride modeled after Ferris' original wheel. The simulation shows a wheel whose radius can be varied from 40 m (Ferris' original wheels) to 100 m, or about 10 meters larger than the current world record. In addition, the rotational speed of the wheel can be varied from 10 to 20 m/s. By selecting the checkbox, the free-body diagram can be shown. The Ferris Wheel Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the jar file will run the program if Java is installed.
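The free-body diagram at the top and bottom of the wheel reduces to one line of algebra per position. An illustrative sketch, assuming uniform speed (not part of the EJS model code):

```python
def apparent_weight(mass, speed, radius, g=9.8):
    """Normal force on a rider at the top and bottom of a Ferris wheel.

    At constant speed v the net radial force is m*v^2/r toward the center:
      top:    N = m*(g - v^2/r)   -> rider feels lighter
      bottom: N = m*(g + v^2/r)   -> rider feels heavier
    """
    a_c = speed**2 / radius  # centripetal acceleration
    return {"top": mass * (g - a_c), "bottom": mass * (g + a_c)}
```

For a 70 kg rider at 10 m/s on a 50 m wheel, the normal force drops to 546 N at the top and climbs to 826 N at the bottom, versus 686 N at rest.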

Belloni, Mario

2012-01-23

314

Outdoor ground impedance models.

Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values. PMID:21568385

Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram

2011-05-01

315

Interference Model: Ripple Tank

NSDL National Science Digital Library

The Interference Model: Ripple Tank investigates constructive and destructive interference between two point sources. The user can change the point sources' frequency, location, separation, and phase difference. The model also shows the difference in distance from the point sources to a movable observation point. The Interference Model: Ripple Tank was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_Ripple_Tank_Interference.jar file will run the program if Java is installed.
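The path-difference rule the model displays can be sketched as a short classifier for in-phase sources (function names and the tolerance are illustrative):

```python
import math

def interference_type(p, s1, s2, wavelength, tol=1e-6):
    """Classify interference at observation point p from sources s1, s2.

    Computes the path-length difference |d1 - d2| in wavelengths: a whole
    number gives constructive interference, a half-integer gives
    destructive (assuming the two point sources oscillate in phase).
    """
    d1 = math.dist(p, s1)
    d2 = math.dist(p, s2)
    diff = abs(d1 - d2) / wavelength   # path difference in wavelengths
    frac = diff - round(diff)          # distance to nearest integer
    if abs(frac) < tol:
        return "constructive"
    if abs(abs(frac) - 0.5) < tol:
        return "destructive"
    return "intermediate"
```

On the perpendicular bisector of the sources the path difference is zero, so every point there is constructive, the central bright fringe visible in the ripple tank.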

Duffy, Andrew

2010-04-25

316

Conscious awareness of our environment is based on a feedback loop comprised of sensory input transmitted to the central nervous system leading to construction of our ''model of the world,'' (Lewis et al, 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory derived processes. These are the processes which provide the designer the ability to meta model (build a model of a model) the user; consequently, matching the mental model of the user with that of the designer's and, coincidentally, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized, it is closer to equivocal; thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and to formulate these into models for success and knowledge based outcomes is the subject of the discussion that follows.

Brown-VanHoozer, S. A.

1999-06-02

317

NASA Technical Reports Server (NTRS)

An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principle design agent as is increasingly more common for poorly testable high performance space systems.

Briggs, Hugh C.

2008-01-01

318

Models of holographic superconductivity

We construct general models for holographic superconductivity parametrized by three couplings which are functions of a real scalar field and show that under general assumptions they describe superconducting phase transitions. While some features are universal and model independent, important aspects of the quantum critical behavior strongly depend on the choice of couplings, such as the order of the phase transition and critical exponents of second-order phase transitions. In particular, we study a one-parameter model where the phase transition changes from second to first order above some critical value of the parameter and a model with tunable critical exponents.

Aprile, Francesco [Institute of Cosmos Sciences and Estructura i Constituents de la Materia Facultat de Fisica, Universitat de Barcelona, Avenida Diagonal 647, 08028 Barcelona (Spain); Russo, Jorge G. [Institute of Cosmos Sciences and Estructura i Constituents de la Materia Facultat de Fisica, Universitat de Barcelona, Avenida Diagonal 647, 08028 Barcelona (Spain); Institucio Catalana de Recerca i Estudis Avancats (ICREA), Paseo Lluis Companys, 23, 08010 Barcelona (Spain)

2010-01-15

319

NASA Technical Reports Server (NTRS)

The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

Sapyta, Joe; Reid, Hank; Walton, Lew

1993-01-01

320

NSDL National Science Digital Library

The Central Force JavaScript Model computes the trajectory of a particle acted on by a central force. The model uses a JavaScript mathematical expression parser to read the force and an adaptive-step Runge-Kutta 5(4) algorithm to compute the trajectory. This model is designed to test the speed of the JS parser and the accuracy of the EJS JavaScript ODE solver. The Central Force JS Model was developed using the Easy Java Simulations (EJS) version 5. It is distributed as a ready-to-run html page and requires only a browser with JavaScript support.
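A comparable computation can be sketched in Python using SciPy's adaptive RK45 integrator in place of the model's JavaScript RK 5(4) solver. All names and defaults below are illustrative, not from the EJS source:

```python
import numpy as np
from scipy.integrate import solve_ivp

def central_force_orbit(r0, v0, t_final, force=lambda r: -1.0 / r**2):
    """Integrate planar motion under a central force f(r) r_hat.

    State is [x, y, vx, vy]; the default is an attractive inverse-square
    force in units where GM = 1. RK45 adapts its step to meet the
    requested tolerances, analogous to the model's RK 5(4) solver.
    """
    def rhs(t, s):
        x, y, vx, vy = s
        r = np.hypot(x, y)
        f = force(r)  # signed radial force per unit mass
        return [vx, vy, f * x / r, f * y / r]

    return solve_ivp(rhs, (0.0, t_final),
                     [r0[0], r0[1], v0[0], v0[1]],
                     method="RK45", rtol=1e-9, atol=1e-12)

# Accuracy check mirroring the model's purpose: a circular orbit with
# r = 1 and v = 1 should return to its starting point after t = 2*pi.
sol = central_force_orbit((1.0, 0.0), (0.0, 1.0), 2 * np.pi)
```

Comparing the final state against the known analytic orbit is a simple way to quantify solver accuracy, which is exactly what the EJS test model is for.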

Christian, Wolfgang

2013-09-01

321

Space station parametric models

NASA Technical Reports Server (NTRS)

The development of two parametric models for a four-panel planar initial space station is described. The derivations of the distributed parameter model are presented in detail with the hope that the same method and procedures can be employed for stations with different configurations or for changes within the same configuration class. The 19-DOF finite-element model is also described. With the availability of the 19-DOF and a lower-DOF space station models, the frequency characteristics of the various dynamical systems in the space station environment are identified.

Hamidi, M.; Wang, S. J.

1985-01-01

322

[Simple artificial mouth model].

A simple artificial mouth model is established under our laboratory condition. Development of monobacterial plaque and mixed bacterial plaque was studied in this artificial mouth model. The samples were subjected to viable count, microhardness measurement, etc. The result showed that the controlled conditions of the model can be used to study plaque development and earlier enamel lesion production on a time-dependent basis. It is concluded that the simple artificial mouth model is suitable for a wide range of dental applications. PMID:9387546

Zhu, M; Liu, Z; Li, M

1996-03-01

323

Atmospheric transport modeling

Predictions or estimates of the transport of materials in the atmosphere following an accident depend upon knowledge of the wind field in the vicinity of the accident site. Wind field models range in sophistication from simple straight-line transport models to approximate solutions of the full set of Navier-Stokes equations in three dimensions. The basic types of models, in order of increasing sophistication, include: uniform wind field; area of influence; empirical interpolation; objective analysis; simplified physics; and full physics (primitive equation). With increased sophistication come also increased memory, computational time, and data input requirements. Each of the six types of models is discussed in detail. (ERB)
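The simplest end of the hierarchy, straight-line transport in a uniform wind field, is often realized as a Gaussian plume. A sketch of the standard ground-reflected plume formula (parameter names are illustrative; dispersion lengths would normally come from stability-class curves):

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h=0.0):
    """Ground-reflected Gaussian plume concentration (straight-line transport).

    q: source strength (mass/time), u: uniform wind speed along x,
    sigma_y, sigma_z: lateral/vertical dispersion lengths at the
    downwind distance of interest, h: effective release height.
    Returns the concentration at crosswind offset y and height z.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The image-source term models total reflection at the ground; dropping it recovers the free-space Gaussian puff centered at the release height.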

Ramsdell, J.V.

1981-11-01

324

A locally nonequilibrium model of superdiffusion is proposed that is based on the partition of the set of diffusing particles into groups according to the flight length of these particles. The process of diffusion is described in terms of partial concentrations of particles belonging to different groups. As special limit cases, the model yields equations with fractional time derivative and the so-called porous medium equation. The basic equations of the model are Markov equations; therefore, they easily include reaction terms. The model can be applied to describing the types of diffusion in which the diffusing particles are in free flight most of the time.

Shkilev, V. P. [National Academy of Sciences of Ukraine, Institute of Surface Chemistry (Ukraine)], E-mail: shkilevv@ukr.net

2008-11-15

325

Modelling approaches in biomechanics.

Conceptual, physical and mathematical models have all proved useful in biomechanics. Conceptual models, which have been used only occasionally, clarify a point without having to be constructed physically or analysed mathematically. Some physical models are designed to demonstrate a proposed mechanism, for example the folding mechanisms of insect wings. Others have been used to check the conclusions of mathematical modelling. However, others facilitate observations that would be difficult to make on real organisms, for example on the flow of air around the wings of small insects. Mathematical models have been used more often than physical ones. Some of them are predictive, designed for example to calculate the effects of anatomical changes on jumping performance, or the pattern of flow in a 3D assembly of semicircular canals. Others seek an optimum, for example the best possible technique for a high jump. A few have been used in inverse optimization studies, which search for variables that are optimized by observed patterns of behaviour. Mathematical models range from the extreme simplicity of some models of walking and running, to the complexity of models that represent numerous body segments and muscles, or elaborate bone shapes. The simpler the model, the clearer it is which of its features is essential to the calculated effect.

Alexander, R McN

2003-01-01

326

Western Research Institute (WRI) has developed a numerical model (TCROW) to describe CROW{sup TM} processing of contaminated aquifers. CROW is a patented technology for the removal of contaminant organics from water-saturated formations by injection of hot water or low-temperature steam. TCROW is based on a fully implicit, thermal, compositional model (TSRS) previously developed by WRI. TCROW's formulation represents several enhancements and simplifications over TSRS and results in a model specifically tailored to modeling the CROW process.

NONE

1997-04-01

327

Eliciting information for product modeling using process modeling

A product model is a formal and structured definition of product information. The most common procedure for defining a product data model is to first describe the business and/or engineering process in a formal process model, then to create a product data model based on the process model. However, there is a logical gap between process modeling and product

Ghang Lee; Charles M. Eastman; Rafael Sacks

2007-01-01

328

Bayesian Data-Model Fit Assessment for Structural Equation Modeling

ERIC Educational Resources Information Center

Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

Levy, Roy

2011-01-01

329

NASA Astrophysics Data System (ADS)

The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. 
Our operator spin foam framework can also be used for more general spin foam models which are not symmetric with respect to one or more moves we consider.

Bahr, Benjamin; Hellmann, Frank; Kamiński, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

2011-05-01

330

Biosphere Process Model Report

To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.
Collectively, the potential human receptor and exposure pathways form the biosphere model. More detailed technical information and data about potential human receptor groups and the characteristics of exposure pathways have been developed in a series of AMRs and Calculation Reports.

J. Schmitt

2000-05-25

331

"Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance. PMID:12024761

Magretta, Joan

2002-05-01

332

Modelling in the Mathematics Classroom.

ERIC Educational Resources Information Center

Describes an experience with modeling as a teaching technique in the mathematics classroom as opposed to mathematical modeling. Offering models in the mathematics classroom is a good idea. Presents fundamental ideas for creating an effective learning environment with models. (WRM)

Lee, Clare

2000-01-01

333

A geometrical model (GM) featuring a visualizable reduction of the elementary particles and interactions down to common elements has been developed. As a consequence, a taxonomy of particles and various interactions emerges, all in consonance with the Standard Model (SM) of particle physics. However, the GM goes well beyond the SM, incorporating a number of fundamental phenomena and issues for…

J. S. Avrin

2001-01-01

334

ERIC Educational Resources Information Center

For the past five summers, the authors have taught summer school to recent immigrants and refugees. Their experiences with these fourth-grade English language learners (ELL) have taught them the value of using models to build scientific and mathematical concepts. In this article, they describe the use of different forms of 2- and 3-D models to…

Weinburgh, Molly; Silva, Cecilia

2011-01-01

335

NSDL National Science Digital Library

Lesson is designed to introduce students to cranial nerves through the use of an introductory lecture. Students will then create a three-dimensional model of the cranial nerves. An information sheet will accompany the model in order to help students learn crucial aspects of the cranial nerves.

Juliann Garza (University of Texas-Pan American Physician Assistant Studies)

2010-08-16

336

Multiplying Fractions (Area Model)

NSDL National Science Digital Library

In this teaching idea, students will learn how to use the area model to find the product when two fractions are multiplied. NOTE: Click the Download link on the right side of the screen to display the lesson without ads and to view the graphic example of the model.
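The lesson's graphic is not reproduced here, but the area-model idea it teaches can be sketched in code: partition the unit square into a grid, shade columns for one fraction and rows for the other, and read the product off the doubly shaded cells. The function name and the worked values below are illustrative, not part of the lesson.

```python
from fractions import Fraction

def area_model_product(a: Fraction, b: Fraction) -> Fraction:
    """Multiply two fractions by counting grid cells.

    The unit square is cut into a.denominator columns and b.denominator
    rows; the overlap of a.numerator shaded columns with b.numerator
    shaded rows gives the product.
    """
    overlap_cells = a.numerator * b.numerator    # doubly shaded cells
    total_cells = a.denominator * b.denominator  # all cells in the grid
    return Fraction(overlap_cells, total_cells)

# 2/3 of 3/4: a 3-by-4 grid has 12 cells, 6 of them doubly shaded
print(area_model_product(Fraction(2, 3), Fraction(3, 4)))  # 1/2
```

Because `Fraction` reduces automatically, the counted ratio 6/12 comes back in lowest terms, just as students simplify the shaded-cell count by hand.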

Page, Audrey P.

2012-04-22

337

Reliability model generator specification

NASA Technical Reports Server (NTRS)

The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE is described. An account is given of motivation for RMG and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

Cohen, Gerald C.; Mccann, Catherine

1990-01-01

338

COMMUTER EXPOSURE MODELING METHODOLOGIES

Two methodologies for modeling commuter exposures are proposed: computer-oriented approach and a manual approach. Both modeling methodologies require that major commuter routes, or pathways, be identified and that the traffic on the remainder of the roadway network be treated as ...

339

LONGPRO Stream Modeling Exercise

NSDL National Science Digital Library

The purpose of this exercise is to integrate modeling with field data. The activity includes links to a "virtual field trip" of maps and photographs. Data from a creek is included in the field trip and students use an Excel spreadsheet model to analyze the data.

Locke, Bill

340

NSDL National Science Digital Library

In this activity, create a model of our lungs. Using simple everyday materials, construct a model that demonstrates how when you breathe, your lungs and diaphragm fill with air and expand. This activity guide includes a step-by-step instructional video.

Center, Saint L.

2014-02-03

341

Legal Policy Optimizing Models

ERIC Educational Resources Information Center

The use of mathematical models originally developed by economists and operations researchers is described for legal process research. Situations involving plea bargaining, arraignment, and civil liberties illustrate the applicability of decision theory, inventory modeling, and linear programming in operations research. (LBH)

Nagel, Stuart; Neef, Marian

1977-01-01

342

Structural Equation Model Trees

ERIC Educational Resources Information Center

In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

2013-01-01

343

NSDL National Science Digital Library

In this activity, learners make a model of a power plant that uses steam. Learners use simple materials like foil, a tin can, and a pot of water to model a geothermal power plant. Learners use a pinwheel to observe the power produced by the steam. SAFETY NOTE: Adult assistance required.

Commission, California E.

2006-01-01

344

Introduction to Theoretical Modelling

NASA Astrophysics Data System (ADS)

We briefly overview commonly encountered theoretical notions arising in the modelling of quantum gases, intended to provide a unified background to the `language' and diverse theoretical models presented elsewhere in this book, and aimed particularly at researchers from outside the quantum gases community.

Davis, Matthew J.; Gardiner, Simon A.; Hanna, Thomas M.; Nygaard, Nicolai; Proukakis, Nick P.; Szymańska, Marzena H.

2013-02-01

345

AGRICULTURAL SIMULATION MODEL (AGSIM)

AGSIM is a large-scale econometric simulation model of regional crop and national livestock production in the United States. The model was initially developed to analyze the aggregate economic impacts of a wide variety of issues facing agriculture, such as technological change, pest...

346

This book presents the papers given at a conference on computerized simulation. Topics considered at the conference included expert systems, modeling in electric power systems, power systems operating strategies, energy analysis, a linear programming approach to optimum load shedding in transmission systems, econometrics, simulation in natural gas engineering, solar energy studies, artificial intelligence, vision systems, hydrology, multiprocessors, and flow models.

Hanham, R.; Vogt, W.G.; Mickle, M.H.

1986-01-01

347

Model Organisms and Arabidopsis

NSDL National Science Digital Library

Lesson plan with three parts. First part is to get students thinking about the types of organism used by research scientists. The second describes characteristics of a model plant. The third part describes the Arabidopsis as a model plant for research scientists.

Dr. John Kowalski (Roanoke Valley Governor's School for Science and Technology)

2007-07-16

348

A general overview of quark models of hadrons is presented. Experimental results and theoretical attempts to explain the data are discussed. Bag models and pion clouds, hadron masses and baryon magnetic moments, quark clustering and NN interactions, and the G_A/G_V ratio are topics included in this review. (AIP)

Lipkin, H.J.

1984-11-15

349

National Technical Information Service (NTIS)

The DSM algorithm solves the model-selection problem for a ventilator-management advisor (VMA). A VMA is a computer program that applies patient-specific models of physiology to interpret intensive-care unit (ICU) data and to predict the effects of altern...

G. W. Rutledge

1995-01-01

350

Automated Student Model Improvement

ERIC Educational Resources Information Center

Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

2012-01-01

351

Insect Models of Immunosenescence

For the past few decades invertebrates have been used extensively as models for understanding the general process of senescence (see reviews by Partridge and Gems 2002; Grotewiel et al. 2005; Keller and Jemielity 2006; Houthoofd and Vanfleteren 2007) and since the 1920s as models for understanding the genes, signaling pathways and cellular processes involved in innate…

Jeff Leips

352

NASA Technical Reports Server (NTRS)

A modular structured system of computer programs is presented utilizing earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in observational data frequencies, accuracies and scopes. The Eom in its present state is a first-order approach to a geophysical model of the earth's dynamics.

Knezovich, F. M.

1976-01-01

353

Numerical investigations to identify the physical processes responsible for the generation, evolution, and dissipation of oceanic thermal anomalies (OTA) were carried out using the numerical dynamic model of the North Pacific Experiment (Norpax). The Norpax model is based on time-integrations of the finite- difference forms of the primitive equations. It possesses an actual coastal configuration and 10 vertical layers, with

JOSEPH CHI KAN HUANG

1979-01-01

354

Computational color vision model

NASA Astrophysics Data System (ADS)

Previously a computational model of human color vision was described which simulates the main retinal and cortical processes involved in color perception and which makes predictions about responses to spatiochromatic stimuli. The emphasis from early on was on ensuring validation of the model as it developed, but its growing complexity combined with considerations of linking it to a multiscale contrast model made the development increasingly cumbersome. The model was therefore completely rewritten as a set of Khoros (Khoral Research Inc) utilities which provide user friendly access to the model and its components via the visual programming interface. This paper describes the details of Khoros implementation and presents examples of the quantitative predictions made by the model for different simulated psychophysical experiments including increment threshold, grating sensitivity and grating masking. Current areas of activity include examining different gain processes at different stages of the model and their implications as possible components of color constancy mechanisms, and the impact of different types of cortical demultiplexing processes on the predictions made by the model.

Moorhead, Ian R.

1998-07-01

355

We provide a new electromagnetic mass model admitting the Chaplygin gas equation of state. We investigate three specializations, the first characterized by a vanishing effective pressure, the second provided with a constant effective density and the third described by a constant effective pressure p0. For these specializations we will discuss the models assuming that sigma e^(lambda/2) = sigma_0 r^s, where…

I. Radinschi; F. Rahaman; M. Kalam; K. Chakraborty

2009-01-01

356

Abstract: We describe the Transferable Belief Model, a model for representing quantified beliefs based on belief functions. Beliefs can be held at two levels: 1) a credal level where beliefs are entertained and quantified by belief functions, 2) a pignistic level where beliefs can be used to make decisions and are quantified by probability functions. The relation between the belief

Philippe Smets; Robert Kennes

1994-01-01

357

We describe the Transferable Belief Model, a model for representing quantified beliefs based on belief functions. Beliefs can be held at two levels: 1) a credal level where beliefs are entertained and quantified by belief functions, 2) a pignistic level where beliefs can be used to make decisions and are quantified by probability functions. The relation between the belief function

Philippe Smets; Yen-teh Hsia; Alessandro Saffiotti; Robert Kennes; Elizabeth Umkehrer

1991-01-01

358

NASA Astrophysics Data System (ADS)

A πNN model inspired by Quantum Chromodynamics is presented. The model gives an accurate fit to the most recent Arndt NN phase shifts up to 1 GeV and can be applied to study intermediate- and high-energy nuclear reactions.

Lee, T.-S. H.

1989-08-01

359

National Technical Information Service (NTIS)

This thesis introduces Hidden Process Models (HPMs). HPMs are a probabilistic time series model for data assumed to be generated by a set of processes, where each process is characterized by a unique spatial-temporal signature and a probability distributi...

R. A. Hutchinson

2009-01-01

360

DYNAMIC ESTUARY MODEL PERFORMANCE

Applications of the Dynamic Estuary Model (DEM) to both the Delaware and Potomac Estuaries by the Environmental Protection Agency during the 1970s are summarized and evaluated. Methods for calibrating, refining, and validating this model, and statistics for evaluating its perform...

361

ERIC Educational Resources Information Center

Noting that linguistics and the neurological sciences have developed independently, this paper presents a coordinated approach to man's understanding of language, cognition, and mind. A neurological model is developed following a discussion of the rationale of such an approach. Chapters include: (1) the relation of neurological evidence to models…

Whitaker, H. A.

362

National Technical Information Service (NTIS)

Two simplified models of the secondhand car market are developed. These are then used to show some of the effects that would follow from changes in the number of company cars, in the prices of new cars and in the running costs of cars. In one model all ca...

J. C. Tanner

1984-01-01

363

NSDL National Science Digital Library

In this two-part activity, learners explore the Earth and Sun's positions in relation to the constellations of the ecliptic with a small model. Then they extend to explore the motions of the Earth and the inner planets in a larger classroom-size model.

Observatory, Mcdonald

2011-01-01

364

NSDL National Science Digital Library

This lesson plan helps students understand multi-digit division by constructing area models. The included interactive provides two example problems and helps connect the model to the traditional long division algorithm. Students solve their own problems using graph paper and/or base-10 blocks and progress toward mental strategies.
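The connection that lesson draws between the area model and the long division algorithm can be sketched as partial quotients: each rectangle in the area model corresponds to one chunk of the quotient. The function below is my own illustration of that idea, not the lesson's interactive.

```python
def area_model_division(dividend: int, divisor: int):
    """Divide by stacking partial quotients (powers-of-ten chunks),
    mirroring the rectangles drawn in an area model of division."""
    partials = []
    remainder = dividend
    chunk = 1
    # find the largest power-of-ten chunk that fits
    while divisor * chunk * 10 <= remainder:
        chunk *= 10
    while remainder >= divisor:
        while divisor * chunk > remainder:
            chunk //= 10
        q = (remainder // (divisor * chunk)) * chunk  # one rectangle's width
        partials.append(q)
        remainder -= divisor * q
    return partials, remainder

# 672 / 5: rectangles of width 100, 30, and 4, with 2 left over
print(area_model_division(672, 5))  # ([100, 30, 4], 2)
```

Summing the partial quotients (100 + 30 + 4 = 134) recovers the answer long division produces digit by digit, which is exactly the bridge the lesson aims to build.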

2014-01-01

365

ERIC Educational Resources Information Center

Understanding antibody structure and function is difficult for many students. The rearrangement of constant and variable regions during antibody differentiation can be effectively simulated using a paper model. Describes a hands-on laboratory exercise which allows students to model antibody diversity using readily available resources. (PVD)

Baker, William P.; Moore, Cathy Ronstadt

1998-01-01

366

This review focuses upon an animal model of relapse. The basic model is to establish that a drug is functioning as a reinforcer. Drug is then replaced with vehicle, and responding is allowed to extinguish. Exteroceptive or interoceptive stimuli are then presented to determine whether behavior that was previously reinforced by drug would be reinstated. Experimental attention has been directed

Marilyn E. Carroll; Sandra D. Comer

1996-01-01

367

Presented is a study involving (i) theoretical modelling of the newly invented ‘GP’ Mechanical Thrombectomy Device (MTD) and (ii) laboratory verification of the theoretical modelling. For the latter, the MTD was placed in two positions (i) end mounted on a classic Seldinger catheter and (ii) embedded within the Seldinger catheter. Blood clots from porcine arteries supported in an arterial jig

G. Pearce; S. Alyas; N. D. Perkinson; J. H. Patrick

2008-01-01

368

Recalibrating Software Reliability Models

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Worse, we are not even in a position to be able to decide a priori which of the many models is most suitable in a particular context. Our own recent work has

Sarah Brocklehurst; P. Y. Chan; Bev Littlewood; John Snell

1990-01-01

369

This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the Photochemical Box Model (PBM). The PBM is a simple stationary single-cell model with a variable height lid designed to provide volume-integrated hour averages of O3 and other ph...

370

Stiff magnetofluid cosmological model

We investigate the behavior of the magnetic field in a cosmological model filled with a stiff perfect fluid in general relativity. The magnetic field is due to an electric current along the x axis. The behavior of the model when a magnetic field is absent is also discussed.

Bali, R.; Tyagi, A.

1988-05-01

371

ERIC Educational Resources Information Center

Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…

Eichinger, John

2005-01-01

372

Lithium battery thermal models

Thermal characteristics and thermal behavior of lithium batteries are important both for the batteries meeting operating life requirements and for safety considerations. Sandia National Laboratories has a broad-based program that includes analysis, engineering and model development. We have determined thermal properties of lithium batteries using a variety of calorimetric methods for many years. We developed the capability to model temperature

Daniel H Doughty; Paul C Butler; Rudolph G Jungst; E. Peter Roth

2002-01-01

373

ERIC Educational Resources Information Center

In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

Ifenthaler, Dirk; Seel, Norbert M.

2013-01-01

374

QUAL2K (or Q2K) is a river and stream water quality model that is intended to represent a modernized version of the QUAL2E (or Q2E) model (Brown and Barnwell 1987). Q2K is similar to Q2E in the following respects: One dimensional. The channel is well-mixed vertically a...

375

ERIC Educational Resources Information Center

Models of state involvement in training child care providers are briefly discussed and the employers' role in training is explored. Six criteria for states that are taken as models are identified, and four are described. Various state activities are described for each criterion. It is noted that little is known about employer and other private…

Morgan, Gwen

376

VENTURI SCRUBBER PERFORMANCE MODEL

The paper presents a new model for predicting the particle collection performance of venturi scrubbers. It assumes that particles are collected by atomized liquid only in the throat section. The particle collection mechanism is inertial impaction, and the model uses a single drop...

377

Computer modeling of detonators

A mathematical model of detonators which describes the resistance of the exploding bridgewire or exploding foil initiator as a function of energy deposition will be described. This model includes many parameters that can be adjusted to obtain a close fit to experimental data. This has been demonstrated using recent experimental data taken within Sandia National Laboratories

C. M. Furnberg

1994-01-01

378

Modeling Soviet Defense Decisionmaking.

National Technical Information Service (NTIS)

My goal in this paper is to set out a simple model of decisionmaking in Soviet defense to help clarify the subject, but also to draw out disagreements and perhaps resolve misunderstandings by offering a structured, abstracted description. In this modeling...

A. J. Alexander

1980-01-01

379

A hybrid receptor model is a specified mathematical procedure which uses not only the ambient species concentration measurements that form the input data for a pure receptor model, but in addition source emission rates or atmospheric dispersion or transformation information chara...

380

NASA Technical Reports Server (NTRS)

The need to develop accurate models for secondary statistics of fading land mobile satellite signals has motivated a study of fading signal autocorrelations and multipath spectrum. Results of autocorrelations and power spectral densities from measured data are presented and comparisons to multipath spectrum models are made.

Stutzman, Warren L.; Barts, R. Michael

1989-01-01

381

National Technical Information Service (NTIS)

Wind tunnel testing of dynamically scaled models plays a key role in assuring that new or modified aircraft will be free of flutter within their flight envelopes. Dynamically scaled models are also widely used in research studies such as active control of...

R. Busan

1998-01-01

382

We show how to exploit symmetry in model checking for concurrent systems containing many identical or isomorphic components. We focus in particular on those composed of many isomorphic processes. In many cases we are able to obtain significant, even exponential, savings in the complexity of model checking.

E. Allen Emerson; A. Prasad Sistla

1993-01-01

383

Editor's Corner: Model Biology

NSDL National Science Digital Library

Models are at the core of the scientific enterprise. They help us make predictions, understand complex systems, generate new ideas, and visualize both the very large and the very small. The generation of models is the creative engine that drives scientific…

Metz, Steve

2011-02-01

384

A combined macro-micro model is applied to a population similar to that forecast for 2035 in the Netherlands in order to simulate the effect on kinship networks of a mating system of serial monogamy. The importance of incorporating a parameter for the degree of concentration of childbearing over the female population is emphasized. The inputs to the model are vectors

Jan Bartlema

1988-01-01

385

NSDL National Science Digital Library

In this quick activity (page 2 of PDF), learners will model how large depressions near the top of a volcano are formed by using an inflating and deflating balloon submerged in flour. The model illustrates how volcanic ground swells and collapses as pressure builds and drains from a magma reservoir. Relates to the linked video, DragonflyTV GPS: Lava Flow.

Twin Cities Public Television, Inc.

2007-01-01

386

Computer simulation is often used as an analysis tool during the design of Automated Guided Vehicle (AGV) systems. However, because of the complexities inherent in automated material handling systems, general-purpose simulation languages must be used creatively to capture the desired detail in the model. This paper presents some general concepts which can be used to model AGV systems. Also, some

Deborah A. Davis; Calder Sq

1986-01-01

387

Composite Load Model Evaluation

The WECC load modeling task force has dedicated its effort in the past few years to develop a composite load model that can represent behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function CMPLDW in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces and conducted the evaluation of the CMPLDW and tested its parameter settings to make sure that: • the model initializes properly, • all the parameter settings are functioning, and • the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting has been performed to guarantee each setting works. This report is a summary of the PNNL testing results and conclusions.

Lu, Ning; Qiao, Hong (Amy)

2007-09-30

388

National Technical Information Service (NTIS)

A model to predict the ballistic ricochet of rod penetrators is presented. The model is based on the premise that the phenomenology of ricochet is one where the impacting rod feeds into a plastic hinge located at the rod/ target interface, and is thus div...

S. B. Segletes

2004-01-01

389

I review current models used to interpret the spectra and variability of microquasars. Among other things, I discuss the structure of the accretion flow and its dependence on mass accretion rate, the intrinsic connection between hot comptonizing corona and compact radio jet in the hard state, as well as possible models for the spectral hysteresis observed during outbursts of transient

Julien Malzac

2007-01-01

390

In this paper we put forward a Bayesian approach for finding CART (classification and regression tree) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior distribution which will guide the stochastic search towards more promising CART models. As the search proceeds, such models can then be selected

Hugh A. CHIPMAN; Edward I. GEORGE; Robert E. MCCULLOCH

1997-01-01

391

A new concept, valid prediction period (VPP), is presented here to evaluate model predictability. VPP is defined as the time period when the prediction error first exceeds a pre-determined criterion (i.e., the tolerance level). It depends not only on the instantaneous error growth, but also on the noise level, the initial error, and tolerance level. The model predictability skill is
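The definition in this abstract — the time at which the prediction error first exceeds a pre-determined tolerance level — can be computed directly from an error time series. The sketch below is a minimal illustration of that definition; the function name and the synthetic exponential error growth are assumptions for the example, not taken from the paper.

```python
import numpy as np

def valid_prediction_period(errors, times, tolerance):
    """Return the first time at which the prediction error exceeds the
    tolerance level; if it never does, the whole period is valid."""
    errors = np.asarray(errors)
    exceeded = np.nonzero(errors > tolerance)[0]
    return times[exceeded[0]] if exceeded.size else times[-1]

# synthetic error growing roughly exponentially from a small initial error
times = np.arange(10)               # forecast days
errors = 0.1 * np.exp(0.5 * times)  # error growth curve
print(valid_prediction_period(errors, times, tolerance=1.0))  # 5
```

As the abstract notes, the result depends jointly on the growth rate, the initial error, and the tolerance: halving the initial error or raising the tolerance both lengthen the VPP.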

P. C. Chu

2002-01-01

392

Designing and planning networks is often done by simulating the influence of various traffic types. This simulation approach depends on reliable and realistic traffic models that are capable of covering first- and second-order statistics of the observed network traffic. In this report, an overview of state-of-the-art models for the simulation of network traffic will be given.

Helmut Hlavacs; Gabriele Kotsis; Christine Steinkellner

1999-01-01

393

Foundations of biomolecular modeling.

The 2013 Nobel Prize in Chemistry has been awarded to Martin Karplus, Michael Levitt, and Arieh Warshel for "development of multiscale models for complex chemical systems." The honored work from the 1970s has provided a foundation for the widespread activities today in modeling organic and biomolecular systems. PMID:24315087

Jorgensen, William L

2013-12-01

394

Model Student Publications Code.

ERIC Educational Resources Information Center

This outline of a model student publications code is offered as a sample on which local districts can model a Publications Code of their own. The outline begins with a Preamble which explains First Amendment rights and student rights. It continues with the following sections: I. Statement of Policy; II. Protected Speech; III. Official Student…

Journalism Education Association.

395

NSDL National Science Digital Library

The Atmosphere-Ocean Model is a computer program that simulates the Earth's climate in three dimensions on a gridded domain. The model requires that the user enter two kinds of input, specified parameters and prognostic variables, and will generate two kinds of output, climatic diagnostics and prognostic variables.

396

Sinusoids: Applications and Modeling

NSDL National Science Digital Library

This demo actively involves students via the software simulations so that the determination of the sinusoidal model has a geometric flavor that complements the algebraic tools stressed in texts. This approach also introduces a modeling aspect since in some situations we may only be able to obtain a "close" approximation to the actual curve or data. Animations and Excel routines are included.

Roberts, Lila F.; Hill, David R.

2004-07-21

397

The Sun is the main energy source of the life on the Earth. Thus, solar radiation energy data and models are important for many areas of research and applications. Many parameters influence the amount of solar energy at a particular standing point of the Earth's surface; therefore, many solar radiation models were produced in the last few years. Solar radiation

Klemen Zaksek; Tomaz Podobnikar; Krištof Oštir

2005-01-01

398

NSDL National Science Digital Library

This activity is a combination outdoor/indoor lab where students will collect natural materials from the environment and use them to create both a mold and cast model of a fossil. Students will learn how a fossil is formed and why scientists use models to help them understand how things work and develop.

399

In order to characterize the transient dynamics of steam turbines subsections, in this paper, nonlinear mathematical models are first developed based on the energy balance, thermodynamic principles and semi-empirical equations. Then, the related parameters of developed models are either determined by empirical relations or they are adjusted by applying genetic algorithms (GA) based on experimental data obtained from a complete

Ali Chaibakhsh; Ali Ghaffari

2008-01-01

400

NSDL National Science Digital Library

The Poisson Distribution Model shows how to use the Apache Commons Math library (included in EJS) to generate random numbers that follow the Poisson distribution. A histogram of the numbers is displayed. This simple teaching example illustrates the use of the Histogram view element and how to speed up a simulation by running the model several times before updating the view.

Franciscouembre

2013-02-13
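The entry above relies on the Apache Commons Math library inside EJS; the same kind of Poisson sampling can be sketched in plain Python using Knuth's multiplication algorithm (function name and parameters here are illustrative, not from the EJS model):

```python
import math
import random
from collections import Counter

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate via Knuth's multiplication algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
draws = [poisson_sample(3.0, rng) for _ in range(10000)]
hist = Counter(draws)          # histogram of sampled counts, as in the EJS model
mean = sum(draws) / len(draws)
print(abs(mean - 3.0) < 0.1)   # sample mean should sit near lam
```

Knuth's method is simple but runs in O(lam) time per draw; for large rates an inversion or rejection sampler is the usual choice.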

401

Storm Water Management Model (SWMM) is a comprehensive model for analysis of quantity and quality problems associated with urban runoff. Both single-event and continuous simulation may be performed on catchments having storm sewers, combined sewers, and natural drainage, for pred...

402

Modelling University Governance

ERIC Educational Resources Information Center

Twentieth century governance models used in public universities are subject to increasing doubt across the English-speaking world. Governments question if public universities are being efficiently governed; if their boards of trustees are adequately fulfilling their trust obligations towards multiple stakeholders; and if collegial models of…

Trakman, Leon

2008-01-01

403

Whole body pharmacokinetic models.

The aim of the current review is to summarise the present status of physiologically based pharmacokinetic (PBPK) modelling and its applications in drug research, and thus serve as a reference point to people interested in the methodology. The review is structured into three major sections. The first discusses the existing methodologies and techniques of PBPK model development. The second describes some of the most interesting PBPK model implementations published. The final section is devoted to a discussion of the current limitations and the possible future developments of the PBPK modelling approach. The current review is focused on papers dealing with the pharmacokinetics and/or toxicokinetics of medicinal compounds; references discussing PBPK models of environmental compounds are mentioned only if they represent considerable methodological developments or reveal interesting interpretations and/or applications.The major conclusion of the review is that, despite its significant potential, PBPK modelling has not seen the development and implementation it deserves, especially in the drug discovery, research and development processes. The main reason for this is that the successful development and implementation of a PBPK model is seen to require the investment of significant experience, effort, time and resources. Yet, a substantial body of PBPK-related research has been accumulated that can facilitate the PBPK modelling and implementation process. What is probably lagging behind is the expertise component, where the demand for appropriately qualified staff far outreaches availability. PMID:12885263

Nestorov, Ivan

2003-01-01

404

NSDL National Science Digital Library

The Space Ship Pilot model is a model of motion under Newton's laws with and without resistive forces. The first environment puts the user in control of docking a space shuttle, and the second puts the user in control of docking a boat.

Joiner, David; The Shodor Education Foundation, Inc.

405

We describe an algorithm for repairing polyhedral CAD models that have errors in their B-REP. Errors like cracks, degeneracies, duplication, holes and overlaps are usually introduced in solid models due to imprecise arithmetic, model transformations, designer's fault, programming bugs, etc. Such errors often hamper further processing like finite element analysis, radiosity computation

Gill Barequet; Subodh Kumar

1997-01-01

406

The quark bag model is reviewed here with particular emphasis on spectroscopic applications and the discussion of exotic objects such as baryonium, gluonium, and the quark phase of matter. The physical vacuum is pictured in the model as a two-phase medium. In the normal phase of the vacuum, outside hadrons, the propagation of quark and gluon fields is forbidden. When small bubbles

Peter Hasenfratz; Julius Kuti

1978-01-01

407

Model optimizes exchanger cleaning

There are many simple models in the literature about fouling that calculate the optimum period for heat exchanger operation. Because of the assumptions and simplifications they contain, these models always yield an optimum point, often incorrectly. However, a more rigorous analysis indicates that the trend of fouling costs (which we try to minimize) can be increasing or decreasing, and

Casado

1990-01-01

408

This task addresses a number of issues that arise in multimedia modeling with an emphasis on interactions among the atmosphere and multiple other environmental media. Approaches for working with multiple types of models and the data sets are being developed. Proper software tool...

409

Illowra FLIR performance model

This paper describes the Illowra forward looking infrared (FLIR) performance model, which has been developed by the Defence Science and Technology Organisation (DSTO) and British Aerospace Australia (BAeA). The Illowra model enables calculations of system performance to be carried out for a wide range of FLIR technologies including thermal, photoconductive, and photovoltaic detector based systems used in various staring and

Grant Burfield; Duncan W. Craig; Murray R. Meharry; John W. Norrington; I. Tuohy

1993-01-01

410

Simulation has become an indispensable tool in the construction and evaluation of mobile systems. By using mobility models that describe constituent movement, one can explore large systems, producing repeatable results for comparison between alternatives. Unfortunately, the vast majority of mobility models---including all those in which nodal speed and distance or destination are chosen independently---suffer from decay; average speed decreases until

Jungkeun Yoon; Mingyan Liu; Brian Noble

2003-01-01

411

National Technical Information Service (NTIS)

The report describes a dynamic model of a traffic circle which has been implemented on a CRT display terminal. The model includes sufficient parameters to allow changes in the structure of the traffic circle, the frequency of traffic introduced to the cir...

I. Englander

1971-01-01

412

Modeling prosody: Different approaches

NASA Astrophysics Data System (ADS)

Prosody pervades all aspects of a speech signal, both in terms of raw acoustic outcomes and linguistically meaningful units, from the phoneme to the discourse unit. It is carried in the suprasegmental features of fundamental frequency, loudness, and duration. Several models have been developed to account for the way prosody organizes speech, and they vary widely in terms of their theoretical assumptions, organizational primitives, actual procedures of application to speech, and intended use (e.g., to generate speech from text vs. to model the prosodic phonology of a language). In many cases, these models overtly contradict one another with regard to their fundamental premises or their identification of the perceptible objects of linguistic prosody. These competing models are directly compared. Each model is applied to the same speech samples. This parallel analysis allows for a critical inspection of each model and its efficacy in assessing the suprasegmental behavior of the speech. The analyses illustrate how different approaches are better equipped to account for different aspects of prosody. Viewing the models and their successes from an objective perspective allows for creative possibilities in terms of combining strengths from models which might otherwise be considered fundamentally incompatible.

Carmichael, Lesley M.

2002-11-01

413

Dasymetric Modeling and Uncertainty

Dasymetric models increase the spatial resolution of population data by incorporating related ancillary data layers. The role of uncertainty in dasymetric modeling has not yet been fully addressed. Uncertainty is usually present because most population data are themselves uncertain, and/or the geographic processes that connect population and the ancillary data layers are not precisely known. A new dasymetric methodology - the Penalized Maximum Entropy Dasymetric Model (P-MEDM) - is presented that enables these sources of uncertainty to be represented and modeled. The P-MEDM propagates uncertainty through the model and yields fine-resolution population estimates with associated measures of uncertainty. This methodology contains a number of other benefits of theoretical and practical interest. In dasymetric modeling, researchers often struggle with identifying a relationship between population and ancillary data layers. The P-MEDM simplifies this step by unifying how ancillary data are included. The P-MEDM also allows a rich array of data to be included, with disparate spatial resolutions, attribute resolutions, and uncertainties. While the P-MEDM does not necessarily produce more precise estimates than do existing approaches, it does help to unify how data enter the dasymetric model, it increases the types of data that may be used, and it allows geographers to characterize the quality of their dasymetric estimates. We present an application of the P-MEDM that includes household-level survey data combined with higher spatial resolution data such as from census tracts, block groups, and land cover classifications.

Nagle, Nicholas N.; Buttenfield, Barbara P.; Leyk, Stefan; Speilman, Seth

2014-01-01

414

Formalizing software ecosystem modeling

Currently there is no formal modeling standard for software ecosystems that models both the ecosystem and the environment in which software products and services operate. Major implications are (1) software vendors have trouble distinguishing the specific software ecosystems in which they are active and (2) they have trouble using these ecosystems to their strategic advantage. In this paper we present

Vasilis Boucharas; Slinger Jansen; Sjaak Brinkkemper

2009-01-01

415

NSDL National Science Digital Library

This exercise is a second or familiarization exercise in spreadsheeting, but is also a mathematical model for slope evolution. It uses the concept of "erosivity" (generally, the relative ratio of driving and resisting forces) and slope angle to reshape an initial topography. Finally, it asks the students themselves to come up with a real-world situation worth modeling.

Locke, Bill

416

Computer Virus Propagation Models

The availability of reliable models of computer virus propagation would prove useful in a number of ways, in order both to predict future threats, and to develop new containment measures. In this paper, we review the most popular models of virus propagation, analyzing the underlying assumptions of each of them, their strengths and their weaknesses. We also introduce

Giuseppe Serazzi; Stefano Zanero

2003-01-01
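Surveys of propagation models typically start from epidemic-style compartment dynamics. As a hedged illustration of that family (this specific SIR sketch is an assumption, not code from the paper), a forward-Euler integration of the classic SIR equations looks like:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the classic SIR compartment model."""
    new_inf = beta * s * i          # mass-action infection term
    rec = gamma * i                 # recovery term
    return s - new_inf * dt, i + (new_inf - rec) * dt, r + rec * dt

# Fractions of the population; R0 = beta / gamma = 5 here (illustrative values)
s, i, r = 0.99, 0.01, 0.0
for _ in range(2000):
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1, dt=0.1)
print(abs((s + i + r) - 1.0) < 1e-9)  # compartments always sum to 1
```

The same skeleton extends to the SIS and SEIR variants such reviews compare, by changing which compartments the recovery flow feeds.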

417

Ionospheric modelling for navigation

NASA Astrophysics Data System (ADS)

Signals transmitted to and from satellites for communication and navigation purposes must pass through the ionosphere. Ionospheric irregularities, most common at equatorial latitudes although they can occur anywhere, can have a major impact on system performance and reliability, and commercial satellite-based navigation service providers need to account for their effects. For a GNSS single-frequency receiver, the Slant Total Electron Content (STEC) must be known by the user through broadcast corrections. In this context, there are several sets of broadcast parameters that can be defined to take this ionospheric term into account. The model chosen to generate the ionospheric correction coefficients for the present study is the NeQuick model, although with a number of adaptations intended to improve effective ionospheric effect modelling performance. The aim of this study is to describe a possible adaptation of the NeQuick model for real-time purposes, suitable for single-frequency users. It will therefore be necessary to determine the performance of this modified NeQuick model in correcting the ionospheric delay. In order to generate the ionospheric corrections for single-frequency receivers using the NeQuick model, a certain approach should be followed to adapt the performance of NeQuick, since this model was originally developed to provide TEC using averaged monthly, not daily, information on solar activity. Thus, to use NeQuick for real-time applications as a broadcast ionospheric model such as Klobuchar, daily solar information at the user point

Aragon Angel, M. A.

418

Probability Matrix Decomposition Models.

ERIC Educational Resources Information Center

Generalizing Boolean matrix decomposition to a larger class of matrix decomposition models is demonstrated, and probability matrix decomposition (PMD) models are introduced as a probabilistic version of the larger class. An algorithm is presented for the computation of maximum likelihood and maximum a posteriori estimates of the parameters of PMD…

Maris, Eric; And Others

1996-01-01

419

National Technical Information Service (NTIS)

Beam propagation codes such as the hose simulation code VIPER require simple models for treating the generation of conductivity by the beam pulse. The VIPER conductivity model calculates the electron density n_e(r, zeta, z), where zeta = ct - z is the dis...

R. F. Hubbard; S. P. Slinker

1986-01-01

420

National Technical Information Service (NTIS)

FASST (Fast All-season Soil Strength) is a one-dimensional dynamic state-of-the-ground model developed by Frankenstein and Koenig (2004) as part of the Army's Battlespace Terrain Reasoning and Awareness (BTRA) research progra...

S. Frankenstein; G. Koenig

2004-01-01

421

A viscous-type dynamic hysteresis model (DHM) is developed. The DHM is compatible with static underlying model of any type and nature (Preisach or non-Preisach). The distinguishing features of the DHM are its arbitrary frequency dependence and the ability to control the shape of the dynamic hysteresis loop. The numerical method for the incorporation of the DHM in magnetodynamic computations is

S. E. Zirka; Y. I. Moroz; P. Marketos; A. J. Moses

2004-01-01

422

We consider the case where inconsistencies are present between a system and its corresponding model, used for automatic verification. Such inconsistencies can be the result of modeling errors or recent modifications of the system. Despite such discrepancies we can still attempt to perform automatic verification. In fact, as we show, we can sometimes exploit the verification results to

Alex Groce; Doron Peled; Mihalis Yannakakis

2002-01-01

423

ERIC Educational Resources Information Center

The authors examined how situation models are updated during text comprehension. If comprehenders keep track of the evolving situation, they should update their models such that the most current information, the here and now, is more available than outdated information. Contrary to this updating hypothesis, E. J. O'Brien, M. L. Rizzella, J. E.…

Zwaan, Rolf A.; Madden, Carol J.

2004-01-01

424

Spiral model pilot project information model

NASA Technical Reports Server (NTRS)

The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.

1991-01-01

425

NASA Technical Reports Server (NTRS)

Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.

Badler, N. I.; Lee, P.; Wong, S.

1985-01-01

426

NASA Astrophysics Data System (ADS)

A model describing rainfall erosion over the course of a long time period is proposed. The model includes: (1) a new equation of detachment of soil particles by water flows based on the Mirtskhulava equation; (2) a new equation for the transport capacity of the flow based on a modified Bagnold equation, which is used in the AGNPS model; (3) modified SCS runoff equation; (4) probability distributions for rainfall. The proposed equations agree satisfactorily with the data of on-site observations of the Moldova and Nizhnedevitsk water-balance stations. The Monte Carlo method is used for numerical modeling of random variables. The results of modeling agree satisfactorily with empirical equations developed for conditions in Russia and the United States. The effect of climatic conditions on the dependence of longtime average annual soil loss on various factors is analyzed. Minimum information is used for assigning the initial data.

Sukhanovskii, Yu. P.

2010-09-01

427

NASA Astrophysics Data System (ADS)

This paper details the development of a modularized system level model of a sensor whose detector dimensions may be small with respect to the distance between adjacent detectors. The effects of individual system components and characteristics such as target to background properties, collection optics, detectors, and classifiers will be modeled. These individual effects will then be combined to provide an overall system performance model. The model will facilitate design trade offs for Unattended Ground Sensors. The size and power restrictions of these sensors often preclude these sensors from being effective in high resolution applications such as target identification. Consequently, existing imager performance models are not directly applicable. However, these systems are well suited for applications such as broad scale classifications or differentiations between targets such as humans, animals or small vehicles. Furthermore, these sensors do not have to be spaced closely together to be effective in these applications. Therefore, the demand for these sensors is increasing for both the military and homeland security.

Robinson, Aaron L.; Halford, Carl E.; Perry, Edward; Wyatt, Thomas

2008-05-01

428

NSDL National Science Digital Library

The EJS Molecular Dynamics model is constructed using the Lennard-Jones potential truncated at a distance of 3 molecular diameters. The motion of the molecules is governed by Newton's laws, approximated using the Verlet algorithm with the indicated time step. For sufficiently small time steps dt, the system's total energy should be approximately conserved. The Molecular Dynamics model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_stp_md_MolecularDynamics.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for classical mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2008-11-16
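The Lennard-Jones-plus-Verlet scheme described above can be sketched for the simplest possible case, two unit-mass particles on a line, to show the energy conservation the entry mentions (a minimal velocity-Verlet illustration with sigma = epsilon = 1; this is not the EJS code itself):

```python
def lj_force(r):
    # Lennard-Jones force on the right-hand particle (sigma = epsilon = 1);
    # positive = repulsive, from F(r) = 24 (2 r^-13 - r^-7)
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

def lj_potential(r):
    return 4.0 * (r**-12 - r**-6)

# Two unit-mass particles on a line, integrated with velocity Verlet
x1, x2 = 0.0, 1.5
v1, v2 = 0.0, 0.0
dt = 0.001

def accel():
    f = lj_force(x2 - x1)   # force on particle 2; equal and opposite on 1
    return -f, f

e0 = lj_potential(x2 - x1)  # initial total energy (particles start at rest)
a1, a2 = accel()
for _ in range(5000):
    x1 += v1 * dt + 0.5 * a1 * dt * dt
    x2 += v2 * dt + 0.5 * a2 * dt * dt
    na1, na2 = accel()
    v1 += 0.5 * (a1 + na1) * dt
    v2 += 0.5 * (a2 + na2) * dt
    a1, a2 = na1, na2

e = lj_potential(x2 - x1) + 0.5 * (v1 * v1 + v2 * v2)
print(abs(e - e0) < 1e-3)  # energy approximately conserved for small dt
```

Velocity Verlet is symplectic, which is why the total energy stays bounded over long runs rather than drifting as it would under plain Euler integration.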

429

Molecular Dynamics Demonstration Model

NSDL National Science Digital Library

The EJS Molecular Dynamics Demonstration model is constructed using the Lennard-Jones potential truncated at a distance of 3 molecular diameters. The motion of the molecules is governed by Newton's laws, approximated using the Verlet algorithm with the indicated Time step. For sufficiently small time steps dt, the system's total energy should be approximately conserved. Users can select various initial configurations using the drop down menu. Ejs Molecular Dynamics Demonstration model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the js_stp_md_MolecularDynamicsDemo.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for statistical mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2008-11-15

430

The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and results are discussed.

Hughes, T.J.; Fastook, J.L. [Univ. of Maine, Orono, ME (United States). Institute for Quaternary Studies

1994-05-01

431

NSDL National Science Digital Library

The EJS Binomial Distribution Model calculates the binomial distribution. You can change the number of trials and the probability. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. The Binomial Distribution Model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_stp_BinomialDistribution.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2009-04-23

432

Uniform Spherical Distribution Model

NSDL National Science Digital Library

The EJS Uniform Spherical Distribution Model shows how to pick a random point on the surface of a sphere. It shows a distribution generated by (incorrectly) picking points using a uniform random distribution as well as the correct weighted distribution. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. The Uniform Spherical Distribution Model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_stp_UniformSphericalDistribution.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2009-04-27
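The correct weighted distribution mentioned above samples uniformly in azimuth and in cos(theta) rather than in theta itself. A plain-Python sketch of that technique (illustrative names, not the EJS code):

```python
import math
import random

def sphere_point(rng):
    """Uniform random point on the unit sphere.

    The key is sampling uniformly in cos(theta), not in theta itself;
    sampling theta uniformly would (incorrectly) crowd points at the poles.
    """
    phi = 2.0 * math.pi * rng.random()
    costheta = 2.0 * rng.random() - 1.0
    sintheta = math.sqrt(1.0 - costheta * costheta)
    return (sintheta * math.cos(phi), sintheta * math.sin(phi), costheta)

rng = random.Random(1)
pts = [sphere_point(rng) for _ in range(20000)]
mean_z = sum(p[2] for p in pts) / len(pts)   # should hover near 0 by symmetry
print(max(abs(p[0]**2 + p[1]**2 + p[2]**2 - 1.0) for p in pts) < 1e-12)
```

The area element sin(theta) d(theta) d(phi) equals d(cos theta) d(phi), which is why uniformity in cos(theta) gives uniformity on the surface.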

433

Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veteran Affairs (VA) motivated by a Varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of comparative effectiveness of interventions.

Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.; Nozick, Linda Karen; Martin, Nathaniel; Bandlow, Alisa; Detry, Richard Joseph; Evans, Leland B.; Berger, Taylor Eugen

2013-09-01

434

Stochastic patch exploitation model

A solitary animal is foraging in a patch consisting of discrete prey items. We develop a stochastic model for the accumulation of gain as a function of elapsed time in the patch. The model is based on the waiting times between subsequent encounters with the prey items. The novelty of the model is that it renders possible, via parameterization of the waiting time distributions, the incorporation of different foraging situations and patch structures into the gain process. The flexibility of the model is demonstrated with different foraging scenarios. The dependence of the gain expectation and variance on the parameters of the waiting times is studied under these conditions. The model allows us to comment upon some of the basic concepts in contemporary foraging theory.

Rita, H.; Ranta, E.

1998-01-01
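The waiting-time formulation above is straightforward to simulate. A minimal sketch with exponential (Poisson-process) waiting times, one of the simplest parameterizations such a model admits (names and values are illustrative, not from the paper):

```python
import random

def patch_gain(t_patch, rate, item_value, rng):
    """Gain accumulated while foraging in a patch for time t_patch.

    Waiting times between prey encounters are i.i.d. exponential with the
    given encounter rate; each encounter yields item_value of gain.
    """
    t, gain = 0.0, 0.0
    while True:
        t += rng.expovariate(rate)  # next inter-encounter waiting time
        if t > t_patch:
            return gain
        gain += item_value

rng = random.Random(7)
runs = [patch_gain(t_patch=10.0, rate=0.8, item_value=1.0, rng=rng)
        for _ in range(5000)]
mean_gain = sum(runs) / len(runs)
print(mean_gain)  # should be near rate * t_patch = 8 for this Poisson case
```

Swapping `expovariate` for another waiting-time distribution (e.g. gamma, to model handling time or patch depletion) changes the gain process without touching the accumulation loop, which is the flexibility the abstract emphasizes.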

435

Glaucoma is a heterogeneous group of disorders that progressively lead to blindness due to loss of retinal ganglion cells and damage to the optic nerve. It is a leading cause of blindness and visual impairment worldwide. Although research in the field of glaucoma is substantial, the pathophysiologic mechanisms causing the disease are not completely understood. A wide variety of animal models have been used to study glaucoma. These include monkeys, dogs, cats, rodents, and several other species. Although these models have provided valuable information about the disease, there is still no ideal model for studying glaucoma due to its complexity. In this paper we present a summary of most of the animal models that have been developed and used for the study of the different types of glaucoma, the strengths and limitations associated with each species use, and some potential criteria to develop a suitable model.

A. Bouhenni, Rachida; Dunmire, Jeffrey; Sewell, Abby; Edward, Deepak P.

2012-01-01

436

Stratiform chromite deposit model

Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.

Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R. II

2010-01-01

437

NASA Astrophysics Data System (ADS)

We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: The Hamiltonian system breaks the parity symmetry; the rotating and counterrotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counterrotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of the model. In this way, we provide a long-sought solution of a cascade of models with immediate relevance in different physical fields, including (i) quantum optics, a two-level atom in single-mode cross-electric and magnetic fields; (ii) solid-state physics, electrons in semiconductors with Rashba and Dresselhaus spin-orbit coupling; and (iii) mesoscopic physics, Josephson-junction flux-qubit quantum circuits.

Xie, Qiong-Tao; Cui, Shuai; Cao, Jun-Peng; Amico, Luigi; Fan, Heng

2014-04-01
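For reference, the generalized Hamiltonian described above is commonly written in a form like the following (notation assumed here, not taken from the paper: g_1 and g_2 are the rotating and counterrotating couplings, phi the counterrotating phase, and epsilon the parity-breaking term):

```latex
H = \omega\, a^{\dagger} a + \frac{\Delta}{2}\,\sigma_z
  + g_1 \left( a\,\sigma_+ + a^{\dagger}\sigma_- \right)
  + g_2 \left( e^{i\phi}\, a^{\dagger}\sigma_+ + e^{-i\phi}\, a\,\sigma_- \right)
  + \epsilon\,\sigma_x
```

The isotropic Rabi model is recovered when g_1 = g_2 with phi = epsilon = 0, matching the limit the solution method was originally developed for.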

438

Modeling solar coronal streamers

NASA Technical Reports Server (NTRS)

Coronal streamer models must now make the transition from research projects to applications tools. SOHO (Solar and Heliospheric Observatory) will be making measurements on plasma parameters in and around streamers and a quantitative model will be required for interpreting the data. The reason for this is that a streamer is inherently a magnetohydrodynamic phenomenon; its properties are determined both by the magnetic field and the dynamics of the plasma. The purpose of the model will be to analyze the energetics of the streamer and place the results into the context of the magnetic field. Current streamer modeling, and why it has taken so long for these models to develop into useful tools, is described and indications on where further development is needed, are given.

Suess, Steven T.

1992-01-01

439

Linear models: permutation methods

Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...

Cade, B.S.

2005-01-01
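The permutation logic described above can be sketched for the simplest case, testing the slope of a simple linear regression: under the null of no association, shuffling the responses breaks any link to the predictor, and the p-value is the fraction of shuffled slopes at least as extreme as the observed one. This is one of several valid permutation schemes (raw-data permutation), not necessarily the estimator-coupled variants the article discusses.

```python
import numpy as np

def permutation_slope_test(x, y, n_perm=2000, seed=0):
    """Permutation test for the OLS slope of y on x.
    Returns (observed slope, permutation p-value)."""
    rng = np.random.default_rng(seed)

    def slope(x, y):
        xc = x - x.mean()
        return np.dot(xc, y - y.mean()) / np.dot(xc, xc)

    observed = slope(x, y)
    count = 0
    for _ in range(n_perm):
        # permuting y destroys any x-y association under H0
        if abs(slope(x, rng.permutation(y))) >= abs(observed):
            count += 1
    # add-one correction keeps the p-value strictly positive
    return observed, (count + 1) / (n_perm + 1)
```

Swapping the least-squares slope for a median- or quantile-based estimator inside `slope` gives the outlier-robust variants the text alludes to.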

440

We present a twin Higgs model based on left-right symmetry with a tree level quartic. This is made possible by extending the symmetry of the model to include two Z_2 parities, each of which is sufficient to protect the Higgs from getting a quadratically divergent mass squared. Although both parities are broken explicitly, the symmetries that protect the Higgs from getting a quadratically divergent mass are broken only collectively. The quadratic divergences of the Higgs mass are thus still protected at one loop. We find that the fine-tuning in this model is reduced substantially compared to the original left-right twin Higgs model. This mechanism can also be applied to the mirror twin Higgs model to get a significant reduction of the fine-tuning, while keeping the mirror photon massless.

Goh, Hock-Seng; Krenke, Christopher A. [Department of Physics, University of Arizona, Tucson, Arizona 85721 (United States)

2007-12-01

441

NASA Technical Reports Server (NTRS)

Mathematical climate modelling has matured as a discipline to the point that it is useful in paleoclimatology. As an example a new two dimensional energy balance model is described and applied to several problems of current interest. The model includes the seasonal cycle and the detailed land-sea geographical distribution. By examining the changes in the seasonal cycle when external perturbations are forced upon the climate system it is possible to construct hypotheses about the origin of midlatitude ice sheets and polar ice caps. In particular the model predicts a rather sudden potential for glaciation over large areas when the Earth's orbital elements are only slightly altered. Similarly, the drift of continents or the change of atmospheric carbon dioxide over geological time induces radical changes in continental ice cover. With the advance of computer technology and improved understanding of the individual components of the climate system, these ideas will be tested in far more realistic models in the near future.

North, G. R.; Crowley, T. J.

1984-01-01
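The abrupt glaciation behaviour described above can be illustrated with a much simpler caricature than the authors' two-dimensional seasonal model: a zero-dimensional energy balance with an ice-albedo feedback, which already exhibits coexisting warm and glaciated equilibria. All numbers here (albedo thresholds, emissivity, solar constant) are illustrative assumptions, not values from the paper.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def albedo(T):
    """Crude ice-albedo feedback: high albedo below 250 K, low above 280 K."""
    return np.interp(T, [250.0, 280.0], [0.62, 0.30])

def equilibrium_temperature(S=1361.0, emissivity=0.61, T0=288.0, n_iter=400):
    """Damped fixed-point iteration for the 0-D global energy balance
       S/4 * (1 - albedo(T)) = emissivity * SIGMA * T**4."""
    T = T0
    for _ in range(n_iter):
        T_target = (S * (1.0 - albedo(T)) / (4.0 * emissivity * SIGMA)) ** 0.25
        T += 0.1 * (T_target - T)   # damped update for numerical stability
    return T
```

Starting the iteration warm or cold lands on two different stable states, a toy analogue of the model's "sudden potential for glaciation" under small external perturbations.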

442

These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e+e- colliders.

Peskin, M.E.

1997-05-01

443

A new two-parameter probability distribution called hypertabastic is introduced to model survival or time-to-event data. A simulation study was carried out to evaluate the performance of the hypertabastic distribution in comparison with popular distributions. We then demonstrate the application of the hypertabastic survival model by applying it to data from two motivating studies. The first demonstrates the proportional hazards version of the model by applying it to a data set from a multiple myeloma study. The second demonstrates an accelerated failure time version of the model by applying it to data from a randomized study of glioma patients who underwent radiotherapy treatment with and without the radiosensitizer misonidazole. Based on the results from the simulation study and the two applications, the proposed model is shown to be a flexible and promising alternative for practitioners in this field.

Tabatabai, Mohammad A; Bursac, Zoran; Williams, David K; Singh, Karan P

2007-01-01

445

NSDL National Science Digital Library

The EJS Ballistics and Orbits model displays ballistic trajectories near the Earth. The model shows the trajectory with respect to the inertial coordinate system and the trajectory as seen from a point of view that is co-rotating with the Earth. You can examine and modify this simulation if you have EJS installed by right-clicking within the plot and selecting "Open EJS Model" from the pop-up menu item. The Ballistics and Orbits model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_nl_teunissen_ballistics_and_orbits.jar file will run the program if Java is installed. Additional information about this model can be found by visiting the author's web site: http://www.cleonis.nl/index.htm.

Teunissen, Cleon

2009-11-03

446

Integrated Workforce Modeling System

NASA Technical Reports Server (NTRS)

There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.

Moynihan, Gary P.

2000-01-01

447

NASA Technical Reports Server (NTRS)

Attempts to place compact radio sources within a general interpretive framework for galactic nuclear activity, or Grand Unified Models, are reviewed. The study of superluminal motion is discussed, stressing the need for more objective methods to compare VLBI maps of compact radio sources at different epochs. Observations and arguments for and against the importance of relativistic beaming in compact radio sources are presented. The simple source model, in which the observed components move with uniform velocity along the source symmetry axis away from a stationary self-absorbed core is examined and suggestions for modifying the model are given. The parabolic spectra of blazars are discussed. The black hole model for explaining general activity in the nuclei of galaxies and radio emission and the incorporation of the apparent evolutionary properties of active galactic nuclei into grand models are also examined.

Blandford, Roger D.

1987-01-01

448

NSDL National Science Digital Library

The Ejs Spherical Pendulum model displays the dynamics of a spherical pendulum in three dimensions. The pendulum is initially displaced from equilibrium and the pendulum bob has zero initial velocity. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. Ejs Spherical Pendulum model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_osc_Pendulum3D.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for Newtonian mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2008-07-05

449

NASA Astrophysics Data System (ADS)

We develop a composite polytropic solar model characterized by three polytropic indices. This phenomenological model can represent some of the main features of the standard solar model (SSM). It is used to study neutrino production and various solar properties. Three solar models with lower core temperatures than that predicted by the SSM are considered. We find that lowering the core temperature by 3 percent requires changes of the opacity of the middle layer by 32-35 percent, a factor of three larger than the current uncertainty. Thus, one does not expect that the solar neutrino puzzle can be solved within the framework of the SSM. The calculation of the asymptotic p-mode frequencies reveals the need to modify the polytropic solar model. We also find that the p-mode frequencies are not sensitive to changes in the central temperature. Finally, the Mikheyev-Smirnov-Wolfenstein effect is studied and considered as a solution to the solar neutrino problem.

Tsai, Jongni
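A composite polytrope of the kind described above stitches together solutions of the Lane-Emden equation with different indices. As a minimal sketch of the underlying building block (not the author's composite model), the following integrates a single polytrope theta'' + (2/xi) theta' + theta^n = 0 with RK4 and locates the first zero xi_1, i.e. the dimensionless stellar surface; the step size is an assumption.

```python
import numpy as np

def lane_emden_first_zero(n, h=1e-3):
    """First zero xi_1 of the Lane-Emden equation for polytropic index n,
    with theta(0) = 1, theta'(0) = 0."""
    xi = 1e-6
    theta = 1.0 - xi**2 / 6.0          # series expansion near the centre
    dtheta = -xi / 3.0
    while theta > 0.0:
        def accel(xi, th, dth):
            return -max(th, 0.0)**n - 2.0 * dth / xi
        # classical RK4 step for the system (theta, dtheta)
        k1t, k1d = dtheta, accel(xi, theta, dtheta)
        k2t, k2d = dtheta + 0.5*h*k1d, accel(xi + 0.5*h, theta + 0.5*h*k1t, dtheta + 0.5*h*k1d)
        k3t, k3d = dtheta + 0.5*h*k2d, accel(xi + 0.5*h, theta + 0.5*h*k2t, dtheta + 0.5*h*k2d)
        k4t, k4d = dtheta + h*k3d, accel(xi + h, theta + h*k3t, dtheta + h*k3d)
        theta_new = theta + h * (k1t + 2*k2t + 2*k3t + k4t) / 6.0
        dtheta += h * (k1d + 2*k2d + 2*k3d + k4d) / 6.0
        xi += h
        if theta_new <= 0.0:
            # linear interpolation of the zero crossing within the last step
            return xi - h * theta_new / (theta_new - theta)
        theta = theta_new
    return xi
```

The analytic case n = 1 (theta = sin(xi)/xi, xi_1 = pi) provides a convenient check.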

450

Modeling developmental cognitive neuroscience.

In the past few years connectionist models have greatly contributed to formulating theories of cognitive development. Some of these models follow the approach of developmental cognitive neuroscience in exploring interactions between brain development and cognitive development by integrating structural change into learning. We describe two classes of these models. The first focuses on experience-dependent structural elaboration within a brain region by adding or deleting units and connections during learning. The second models the gradual integration of different brain areas based on combinations of experience-dependent and maturational factors. These models provide new theories of the mechanisms of cognitive change in various domains and they offer an integrated framework to study normal and abnormal development, and normal and impaired adult processing. PMID:16603407

Westermann, Gert; Sirois, Sylvain; Shultz, Thomas R; Mareschal, Denis

2006-05-01

451

NSDL National Science Digital Library

The EJS Radioactive Decay Model simulates the decay of a radioactive sample using discrete random events. It displays the number of radioactive nuclei as a function of time. You can change the initial number of nuclei and the decay constant, as well as switch the plot to a semi-log scale. The Radioactive Decay model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_ms_explicit_RadioactiveDecay.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Christian, Wolfgang

2009-01-23
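The discrete-random-event scheme the simulation uses can be sketched in a few lines: in each time step, every surviving nucleus decays independently with probability lambda*dt, so the number decaying per step is binomial. This is a generic reimplementation of the idea, not the Java source of the EJS model; all parameter values are illustrative.

```python
import numpy as np

def simulate_decay(n0=1000, decay_const=0.1, dt=0.1, t_max=50.0, seed=1):
    """Stochastic radioactive decay via discrete random events.
    Each surviving nucleus decays with probability decay_const*dt per step."""
    rng = np.random.default_rng(seed)
    n, t = n0, 0.0
    times, counts = [0.0], [n0]
    while t < t_max - 1e-12 and n > 0:
        n -= rng.binomial(n, decay_const * dt)   # number decaying this step
        t += dt
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)
```

For large n0 the counts track the exponential n0*exp(-lambda*t); the fluctuations around it are exactly what a semi-log plot of the simulation reveals.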

452

Modelling the Bicoid gradient.

Morphogen gradients provide embryonic tissues with positional information by inducing target genes at different concentration thresholds and thus at different positions. The Bicoid morphogen gradient in Drosophila melanogaster embryos has recently been analysed quantitatively, yet how it forms remains a matter of controversy. Several biophysical models that rely on production, diffusion and degradation have been formulated to account for the observed dynamics of the Bicoid gradient, but no one model can account for all its characteristics. Here, we discuss how existing data on this gradient fit the various proposed models and what aspects of gradient formation these models fail to explain. We suggest that knowing a few additional parameters, such as the lifetime of Bicoid, would help to identify and develop better models of Bicoid gradient formation. PMID:20570935

Grimm, Oliver; Coppey, Mathieu; Wieschaus, Eric

2010-07-01

453

Active Shape Models - 'Smart Snakes

We describe 'Active Shape Models' which iteratively adapt to refine esti- mates of the pose, scale and shape of models of image objects. The method uses flexible models derived from sets of training examples. These models, known as Point Distribution Models, represent objects as sets of labelled points. An initial estimate of the location of the model points in an

T. F. Cootes; C. J. Taylor

1992-01-01
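The Point Distribution Model mentioned above is conventionally built by applying principal component analysis to aligned training shapes: a mean shape plus a small number of eigen-shape modes. The sketch below assumes the landmark sets are already aligned (Procrustes alignment is outside its scope) and is a generic PCA construction, not the authors' code.

```python
import numpy as np

def point_distribution_model(shapes, n_modes=2):
    """Build a Point Distribution Model from aligned training shapes.
    shapes: (n_samples, 2*n_points) array of flattened (x, y) landmarks."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # principal modes of shape variation via SVD of the centred data
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    modes = vt[:n_modes]                     # rows are eigen-shapes
    var = (s**2) / (len(shapes) - 1)         # corresponding variances
    return mean, modes, var[:n_modes]

def reconstruct(mean, modes, b):
    """Generate a shape from model parameters b: x = mean + b . modes."""
    return mean + b @ modes
```

Constraining each parameter b_i to a few standard deviations sqrt(var_i) keeps generated shapes plausible, which is what makes the iterative pose/shape refinement "smart".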

454

Animal models of bipolar disorder

Animal models of human diseases should meet three sets of criteria: construct validity, face validity, and predictive validity. To date, several putative animal models of bipolar disorder have been reported. They are classified into various categories: pharmacological models, nutritional models, environmental models, and genetic models. None of them, however, totally fulfills the three validity criteria, and thus may not be

Tadafumi Kato; Mie Kubota; Takaoki Kasahara

2007-01-01

455

Modeling and Motor Skill Acquisition

This paper reviews the psychological and motor performance modeling literature to identify important factors involved in and affecting the modeling process as it relates to motor skill acquisition. Topics discussed include modeling theory, task specificity of modeling effects, the importance of symbolic coding, temporal spacing of demonstrations, social factors influencing modeling, and the role of modeling in reducing anxiety when

Daniel R. Gould; Glyn C. Roberts

1981-01-01

456

Acoustic models and sonar systems

The basic types of acoustic models are reviewed. These include ray models, spectral integral models, normal mode models, parabolic equation modeling, and 3-D acoustic modeling. Their application to conventional sonar simulation problems is demonstrated. Examples of their use in more advanced signal processing applications are presented

Michael B. Porter

1993-01-01

457

Saturn Radiation (SATRAD) Model

NASA Technical Reports Server (NTRS)

The Saturnian radiation belts have not received as much attention as the Jovian radiation belts because they are not nearly as intense; the famous Saturnian particle rings tend to deplete the belts near where their peak would occur. As a result, there has not been a systematic development of engineering models of the Saturnian radiation environment for mission design. A primary exception is that of Divine (1990). That study used published data from several charged particle experiments aboard the Pioneer 11, Voyager 1, and Voyager 2 spacecraft during their flybys at Saturn to generate numerical models for the electron and proton radiation belts between 2.3 and 13 Saturn radii. The Divine Saturn radiation model described the electron distributions at energies between 0.04 and 10 MeV and the proton distributions at energies between 0.14 and 80 MeV. The model was intended to predict particle intensity, flux, and fluence for the Cassini orbiter. Divine carried out hand calculations using the model but never formally developed a computer program that could be used for general mission analyses. This report seeks to fill that void by formally developing a FORTRAN version of the model that can be used as a computer design tool for missions to Saturn that require estimates of the radiation environment around the planet. The results of that effort and the program listings are presented here along with comparisons with the original estimates carried out by Divine. In addition, Pioneer and Voyager data were scanned in from the original references and compared with the FORTRAN model's predictions. The results were statistically analyzed in a manner consistent with Divine's approach to provide estimates of the ability of the model to reproduce the original data. Results of a formal review of the model by a panel of experts are also presented. Their recommendations for further tests, analyses, and extensions to the model are discussed.

Garrett, H. B.; Ratliff, J. M.; Evans, R. W.

2005-01-01

458

NASA Astrophysics Data System (ADS)

Positron Emission Tomography is a well-established technique that allows imaging and quantification of tissue properties in-vivo. The goal of pharmacokinetic modelling is to estimate physiological parameters, e.g. perfusion or receptor density, from the measured time course of a radiotracer. After a brief overview of clinical applications of PET, we summarize the fundamentals of modelling: distribution volume, Fick's principle of local balancing, extraction and perfusion, and how to calculate equilibrium data from measurements after bolus injection. Three fundamental models are considered: (i) the 1-tissue compartment model, e.g. for regional cerebral blood flow (rCBF) with the short-lived tracer [15O]water; (ii) the 2-tissue compartment model accounting for trapping (one exponential + constant), e.g. for glucose metabolism with [18F]FDG; (iii) the reversible 2-tissue compartment model (two exponentials), e.g. for receptor binding. Arterial blood sampling is required for classical PET modelling, but can often be avoided by comparing regions with specific binding to so-called reference regions with negligible specific uptake, e.g. in receptor imaging. To estimate the model parameters, non-linear least-squares fits are the standard. Various linearizations have been proposed for rapid parameter estimation, e.g. on a pixel-by-pixel basis, at the price of a bias. Such linear approaches exist for all three models, e.g. the Patlak plot for trapping substances like FDG, and the Logan plot to obtain distribution volumes for reversibly binding tracers. The description of receptor modelling is dedicated to the approaches of the subsequent lecture (chapter) of Millet, who works in the tradition of Delforge with multiple-injection investigations.

Müller-Schauenburg, Wolfgang; Reimold, Matthias
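The 1-tissue compartment model listed as case (i) above follows dC/dt = K1*Ca(t) - k2*C(t), whose solution is a convolution of the arterial input with a single exponential. The sketch below uses a simple rectangle-rule discrete convolution on a uniform time grid; the sampling and rate constants are illustrative assumptions.

```python
import numpy as np

def one_tissue_compartment(t, ca, K1, k2):
    """Tissue time-activity curve of the 1-tissue compartment model:
       dC/dt = K1*Ca(t) - k2*C(t)  =>  C(t) = K1 * (Ca conv exp(-k2*t)).
    t must be uniformly sampled; ca is the arterial input on the same grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    # 'full' convolution truncated to the measurement window
    return K1 * np.convolve(ca, kernel)[:len(t)] * dt
```

For a constant (step) input the analytic answer is K1/k2 * (1 - exp(-k2*t)), which makes a handy correctness check; non-linear least-squares fitting of K1 and k2 against measured curves proceeds on top of exactly this forward model.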

459

Complexity regularized hydrological model selection

NASA Astrophysics Data System (ADS)

Ill-posed hydrological model selection problems (that may be unstable or have non-unique solutions) are regularized with hydrological model complexity as the stabilizer. We propose and apply a notion of model complexity, based on Vapnik-Chervonenkis generalization theory, to complexity regularized hydrologic model selection. Better hydrologic models (better performance on future unseen data) on small sample sizes are identified using complexity regularized model selection than when using traditional model selection (without regularization) while both converge in performance for large samples (i.e. regularized model selection is 'consistent'). Case studies using SAC-SMA, SIXPAR and flexible model structures are used to 1) compute and compare model complexities of different model structures, 2) demonstrate the 'consistency' of complexity regularized model selection and 3) demonstrate that regularized model selection identifies the best model structure (out of a set of competing structures) on small sample sizes better than un-regularized model selection.

Arkesteijn, Liselot; Pande, Saket; Savenije, Hubert

2014-05-01
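The regularization idea above, i.e. penalizing empirical performance with a complexity term so that small-sample selection does not chase noise, can be illustrated with a deliberately crude stand-in: polynomial degree as the complexity measure. This is not the VC-generalization-based complexity of the paper, and the penalty weight lam is an arbitrary assumption.

```python
import numpy as np

def select_degree(x, y, max_degree=6, lam=0.01):
    """Complexity-regularized model selection over polynomial degrees:
    minimize training MSE + lam * degree. With lam = 0 this reduces to
    un-regularized selection, which overfits on small samples."""
    best = None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        score = mse + lam * d          # stabilized selection criterion
        if best is None or score < best[1]:
            best = (d, score)
    return best[0]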

460

Tsunami Modeling: Development of Benchmarked Models

NASA Astrophysics Data System (ADS)

We discuss the progress towards the development of benchmarked models for forecasting tsunami inundation. Tsunami hydrodynamics has progressed slower than research in other natural hazards, because for several decades only the largest tsunamis were being reported. With the exception of the 1960 and 1964 events, there had been only qualitative information on inundation. While the basic equations for analysis have been known for decades, the existing synthesis leading to real time forecasts as currently available had to await the development of sophisticated modeling tools, the large-scale laboratory experiments in the 1980s-1990s and the tsunameter recordings of 2003 and since. The field survey results in the 1990s (Synolakis and Okal, 2005) served as crude proxies to free-field tsunami recordings and allowed for the validation and verification of numerical procedures. State-of-the-art inundation and forecasting codes have evolved through a painstaking process of careful validation and verification which can be traced back to the 1990 NSF Catalina workshop on Long-Wave Runup Models (Liu et al., 1991). Operational tsunami forecasting was only made possible through the availability of deep ocean measurements. We will describe this journey from development of the basic field equations to forecasts, through the scientific milestones that served as benchmarks and reality checks. In summary, as research in live networks -where problems and solution ideas arise spontaneously- tsunami hydrodynamic modeling was driven by milestone scientific meetings, and post tsunami surveys that kept identifying novel problem geometries and previously unrecognized phenomena. We discuss necessary validation and verification steps for numerical codes to be used for inundation mapping, design and operations (Synolakis et al., 2007). Liu, P. L.-F., C. E. Synolakis and H. H. Yeh, 1991. Report on the International Workshop on Long-Wave Run-up. J. Fluid Mech., 229, 675-688. Synolakis, C. E. and E. A. Okal, 2005. 1992-2002: perspective on a decade of post tsunami surveys. Adv. Nat. Technol. Hazards, 23, 1-30. Synolakis, C. E., E. N. Bernard, V. V. Titov, U. Kanoglu and F. Gonzalez, 2007. Standards, criteria, and procedures for NOAA evaluation of tsunami numerical models. NOAA OAR Special Report, Contribution No 3053, NOAA/OAR/PMEL, Seattle, WA, 55 pp.

Kanoglu, U.; Synolakis, C. E.

2008-12-01

461

Mouse Models of Arteriosclerosis

Animal models are designed to be preliminary tools for better understanding of the pathogenesis, improvement in diagnosis, prevention, and therapy of arteriosclerosis in humans. Attracted by the well-defined genetic systems, a number of investigators have begun to use the mouse as an experimental system for arteriosclerosis research. Hundreds of inbred lines have been established, the genetic map is relatively well defined, and both congenic strains and recombinant strains are available to facilitate genetic experimentation. Because arteriosclerosis is a complicated disease, which includes spontaneous (native) atherosclerosis, transplant arteriosclerosis, vein graft atherosclerosis, and angioplasty-induced restenosis, several mouse models for studying all types of arteriosclerosis have recently been established. Using these mouse models, much knowledge concerning the pathogenesis of the disease and therapeutic intervention has been gained, e.g., the origins of endothelial and smooth muscle cells in lesions of transplant and vein graft atherosclerosis. This review will not attempt to cover all aspects of mouse models, but will rather focus on models of arterial injuries, vein grafts, and transplant arteriosclerosis, by which the major progress in understanding the mechanisms of the disease has been made. This article will also point out the advantages and disadvantages of a variety of models, and how the models can be appropriately chosen for different purposes of study.

Xu, Qingbo

2004-01-01

462

NSDL National Science Digital Library

The EJS Phases of Moon model displays the appearance of Moon and how it changes depending on the position of Moon relative to Earth and Sun. The main window shows Earth (at the center) and Moon, as well as a circle tracing out Moon's orbit. Sun is far to the right in this picture and therefore the right side of Earth and Moon are bright while the left sides are dark. By using the Options Menu the Moon View window shows the appearance of Moon as seen from Earth when Moon is in the position shown in the main window. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. The EJS Phases of Moon model includes three supplemental documents (see below) that include a middle school lesson plan, a college level worksheet, and the student version of the program. EJS Phases of Moon model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_astronomy_MoonPhases.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for astronomy are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Timberlake, Todd

2009-08-01

463

NASA Astrophysics Data System (ADS)

Traffic in computer networks behaves as a complicated system. Such systems show non-linear features, and their behaviour is difficult to simulate. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us to observe dataflow problems of the networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the modelled network.

Max, G.

2011-01-01
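The bottleneck definition quoted above, the link minimizing capacity minus competing traffic, reduces to a one-liner once per-link measurements are available. The function below is a sketch of that definition only; the example figures are hypothetical, and real available-bandwidth estimation must of course measure the cross traffic first.

```python
def bottleneck(links):
    """Available end-to-end bandwidth along a path: the minimum over links
    of (capacity - competing traffic volume), all in the same units.
    links: iterable of (capacity, competing_traffic) pairs."""
    return min(cap - cross for cap, cross in links)
```

For a path with links (100, 40), (1000, 950) and (10, 2) Mb/s, the tight link is the third one despite the second carrying far more cross traffic, which is exactly why the difference, not the load, defines the bottleneck.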

464

Modeling the transition region

NASA Technical Reports Server (NTRS)

The calculation of engineering flows undergoing laminar-turbulent transition presents special problems. Mean-flow quantities obey neither the fully laminar nor the fully turbulent correlations. In addition, local maxima in skin friction, wall temperature, and heat transfer often occur near the end of the transition region. Traditionally, modeling this region has been important for the design of turbine blades, where the transition region is long in relation to the chord length of the blade. More recently, the need for better transition-region models has been recognized by designers of hypersonic vehicles where the high Mach number, the low Reynolds number, and the low-disturbance flight environment emphasize the importance of the transition region. Needless to say, a model that might work well for the transitional flows typically found in gas turbines will not necessarily work well for the external surface of a hypersonic vehicle. In Section 2 of this report, some of the important flow features that control the transition region will be discussed. In Section 3, different approaches to the modeling problem will be summarized and cataloged. Fully turbulent flow models will be discussed in detail in Section 4; models specifically designed for transitional flow, in Section 5; and the evaluation of models, in Section 6.

Singer, Bart A.

1994-01-01

465

NASA Astrophysics Data System (ADS)

The Common Land Model (CLM) was developed for community use by a grassroots collaboration of scientists who have an interest in making a general land model available for public use and further development. The major model characteristics include enough unevenly spaced layers to adequately represent soil temperature and soil moisture, and a multilayer parameterization of snow processes; an explicit treatment of the mass of liquid water and ice water and their phase change within the snow and soil system; a runoff parameterization following the TOPMODEL concept; a canopy photosynthesis-conductance model that describes the simultaneous transfer of CO2 and water vapor into and out of vegetation; and a tiled treatment of the subgrid fraction of energy and water balance. CLM has been extensively evaluated in offline mode and coupling runs with the NCAR Community Climate Model (CCM3). The results of two offline runs, presented as examples, are compared with observations and with the simulation of three other land models [the Biosphere-Atmosphere Transfer Scheme (BATS), Bonan's Land Surface Model (LSM), and the 1994 version of the Chinese Academy of Sciences Institute of Atmospheric Physics LSM (IAP94)].

Dai, Yongjiu; Zeng, Xubin; Dickinson, Robert E.; Baker, Ian; Bonan, Gordon B.; Bosilovich, Michael G.; Denning, A. Scott; Dirmeyer, Paul A.; Houser, Paul R.; Niu, Guoyue; Oleson, Keith W.; Schlosser, C. Adam; Yang, Zong-Liang

2003-08-01

466

NASA Astrophysics Data System (ADS)

We discuss the possibility that a Z' gauge boson, predicted by a large class of models in which the Higgs doublet is charged under U(1)', can have a reduced gauge coupling and can therefore be quite light, even below the 1 TeV scale, avoiding present experimental constraints. This generic possibility is discussed within the framework of the class of supersymmetric E6-inspired models as an example. Since the Z' can be light, we refer to such models as Little Z' models. One of the main motivations for such models is to reduce the fine-tuning due to the current experimental limits on the Z' mass. As we point out, such fine-tuning arises when the Higgs doublets are charged under the extra U(1)' gauge group, leading to large additional D-terms in the Higgs potential. We show that reducing the value of the extra gauge coupling relaxes the experimental limits, leading to the possibility of low-mass Z' resonances, for example down to 200 GeV, which may yet appear in LHC searches. Although the source of tree-level fine-tuning due to the Z' mass is reduced, this typically comes at the expense of increasing the vacuum expectation value of the U(1)'-breaking standard model singlet field, reducing the fine-tuning to levels similar to that in the minimal supersymmetric standard model.

Belyaev, Alexander; King, Stephen F.; Svantesson, Patrik

2013-08-01

467

Multiscale Cloud System Modeling

NASA Technical Reports Server (NTRS)

The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales, and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global), they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization, such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

Tao, Wei-Kuo; Moncrieff, Mitchell W.

2009-01-01

468

Damping models in elastography

NASA Astrophysics Data System (ADS)

Current optimization-based elastography reconstruction algorithms encounter difficulties when the motion approaches resonant conditions, where the model does a poor job of approximating the real behavior of the material. Model accuracy can be improved through the addition of damping effects. These effects occur in-vivo due to the complex interaction between microstructural elements of the tissue; however, reconstruction models are typically formulated at larger scales where the structure can be treated as a continuum. Attenuation behavior in an elastic continuum can be described as a mixture of inertial and viscoelastic damping effects. In order to develop a continuum damping model appropriate for human tissue, the behavior of each aspect of this proportional, or Rayleigh, damping needs to be characterized. In this paper we investigate the nature of these various damping representations with the goal of best describing the in-vivo behavior of actual tissue, in order to improve the accuracy and performance of optimization-based elastographic reconstruction. Inertial damping effects are modelled using a complex density, where the imaginary part is equivalent to a damping coefficient, and the effects of viscoelasticity are modelled through the use of complex shear moduli, where the real and imaginary parts represent the storage and loss moduli, respectively. The investigation is carried out through a combination of theoretical analysis, numerical experiment, investigation of gelatine phantoms and comparison with other continua such as porous media models.
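A minimal numerical sketch of the two damping representations discussed here, with hypothetical parameter values (not fitted tissue data): inertial damping via a complex density, viscoelasticity via a complex shear modulus (storage plus loss parts), and Rayleigh (proportional) damping C = alpha*M + beta*K combining mass- and stiffness-proportional effects.

```python
import numpy as np

# Viscoelastic damping: complex shear modulus (illustrative values, Pa)
G_storage = 3.0e3           # storage (real) modulus
G_loss = 0.6e3              # loss (imaginary) modulus
G_complex = G_storage + 1j * G_loss
loss_tangent = G_complex.imag / G_complex.real  # tan(delta)

# Inertial damping: complex density, imaginary part acts as a damping coefficient
rho_complex = 1000.0 - 50.0j  # kg/m^3, illustrative

# Rayleigh damping for a toy 2-DOF system: C = alpha*M + beta*K
alpha, beta = 2.0, 1e-4       # hypothetical proportionality coefficients
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
C = alpha * M + beta * K
```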

McGarry, Matthew D. J.; Berger, Hans-Uwe; Van Houten, Elijah E. W.

2007-03-01

469

Modelling urban growth patterns

NASA Astrophysics Data System (ADS)

CITIES grow in a way that might be expected to resemble the growth of two-dimensional aggregates of particles, and this has led to recent attempts1-3 to model urban growth using ideas from the statistical physics of clusters. In particular, the model of diffusion-limited aggregation4,5 (DLA) has been invoked to rationalize the apparently fractal nature of urban morphologies1. The DLA model predicts that there should exist only one large fractal cluster, which is almost perfectly screened from incoming 'development units' (representing, for example, people, capital or resources), so that almost all of the cluster growth takes place at the tips of the cluster's branches. Here we show that an alternative model, in which development units are correlated rather than being added to the cluster at random, is better able to reproduce the observed morphology of cities and the area distribution of sub-clusters ('towns') in an urban system, and can also describe urban growth dynamics. Our physical model, which corresponds to the correlated percolation model6-8 in the presence of a density gradient9, is motivated by the fact that in urban areas development attracts further development. The model offers the possibility of predicting the global properties (such as scaling behaviour) of urban morphologies.
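As a rough illustration of percolation in a density gradient, one can occupy lattice sites with a probability that falls off with distance from the urban center. Note this sketch omits the correlations between development units that are central to the actual model; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_percolation(n=101, p0=1.0, decay=0.05):
    """Occupy lattice sites with probability decreasing radially from
    the center -- a simplified (uncorrelated) stand-in for correlated
    percolation in a density gradient."""
    y, x = np.mgrid[:n, :n]
    r = np.hypot(x - n // 2, y - n // 2)
    p = p0 * np.exp(-decay * r)        # occupancy probability falls off with radius
    return rng.random((n, n)) < p

grid = gradient_percolation()          # dense core, sparse scattered periphery
```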

Makse, Hernán A.; Havlin, Shlomo; Stanley, H. Eugene

1995-10-01

470

Reduced gradient bubble model.

An approach to decompression modeling, the reduced gradient bubble model (RGBM), is developed from the critical phase hypothesis. The phase limit is introduced, extended, and applied within the bubble-nucleation theory proposed by Yount. Much is different in the RGBM algorithm, on both theoretical and applied sides, with a focus on permissible bubble excesses rather than just dissolved gas buildup, something of a departure from traditional models. Overall, the approach is conservative, with changes in parameter settings affording flexibility. Marginal profiles permitted by tables and meters are restricted by the bubble algorithm. Highlighted features of the conservative algorithm include: (1) reduced no-stop time limits from the varying-permeability model (VPM); (2) short safety stops (or shallow swimming ascents) in the 10-20 feet of sea water (fsw) zone; (3) ascent and descent rates of 60 fsw/min, or slower; (4) restricted repetitive exposures, particularly beyond 100 fsw, based on reduced permissible bubble excess; (5) restricted spike (shallow-to-deep) exposures based on excitation of additional micronuclei; (6) restricted multi-day activity based on regeneration of micronuclei; (7) consistent treatment of altitude diving within the model framework; (8) algorithm linked to bubble-nucleation theory and experiment. Coupled to medical reports about the long-term effects of breathing pressurized gases and shortcomings in dissolved gas models, conservative modeling seems prudent. PMID:2276850
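For context, the dissolved-gas buildup that the RGBM's bubble limits then restrict follows the classic exponential uptake/washout law. The sketch below shows only that dissolved-gas kernel, not the RGBM bubble algorithm itself, and uses illustrative numbers.

```python
import math

def tissue_pressure(p0, p_ambient, halftime_min, t_min):
    """Classic exponential dissolved-gas uptake/washout: the tissue
    tension relaxes toward ambient pressure with a compartment halftime.
    This is the dissolved-gas baseline, not the RGBM bubble model."""
    k = math.log(2) / halftime_min
    return p_ambient + (p0 - p_ambient) * math.exp(-k * t_min)

# Nitrogen loading of a 20-min halftime compartment held at depth:
# after exactly one halftime, half the gap to ambient has closed.
p = tissue_pressure(p0=0.79, p_ambient=2.37, halftime_min=20, t_min=20)
```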

Wienke, B R

1990-11-01

471

SPAR Model Structural Efficiencies

The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were:
• Development of a standard methodology and implementation of support system initiating events
• Treatment of loss of offsite power
• Development of a standard approach for emergency core cooling following containment failure
Some of the related issues were not fully resolved, and this project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. This project has therefore addressed SPAR modeling issues:
• SPAR model transparency
• Common cause failure modeling deficiencies and approaches
• AC and DC modeling deficiencies and approaches
• Instrumentation and control system modeling deficiencies and approaches

John Schroeder; Dan Henry

2013-04-01

472

Multiscale modeling of chalcogenides

NASA Astrophysics Data System (ADS)

Chalcogenide glasses exhibit unique properties applicable to a wide range of fields, including electrical and optical switching and the transmission of infrared radiation. In this thesis, we adopt a hierarchical multiscale modeling approach to investigate the fundamental physics of chalcogenide systems. Our multiscale modeling begins in Part I at the quantum mechanical level, where we use the highly accurate Moller-Plesset perturbation technique to derive interaction potentials for elemental and heterogeneous chalcogenide systems. The resulting potentials consist of two-, three-, and effective four-body terms. In Part II, we use these ab initio potentials in classical Monte Carlo simulations to investigate the structure of chalcogenide glasses. We discuss our simulation results in relation to the Phillips model of topological constraints, which predicts critical behavior in chalcogenide systems as a function of average coordination number. Finally, in Part III we address the issue of glass transition range behavior. After reviewing previous models of the glass transition, we derive a new model based on nonequilibrium statistical mechanics and an energy landscape formalism. The new model requires as input a description of inherent structure energies and the transition energies between these structures. To address this issue, we derive an eigenvector-following technique for mapping a multidimensional potential energy landscape. This technique is then extended for application to enthalpy landscapes. Our model will enable the first-ever calculation of glass transition behavior based on only ab initio physics.
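The Phillips model of topological constraints referenced above turns on the average coordination number of the glass network. A one-line sketch of that quantity (constraint counting predicts a rigidity threshold near <r> = 2.4):

```python
def mean_coordination(fractions, coordinations):
    """Average coordination number <r> = sum of x_i * r_i over species,
    the control parameter in Phillips' constraint-counting picture."""
    return sum(x * r for x, r in zip(fractions, coordinations))

# Ge_x Se_(1-x): Ge is 4-coordinated, Se is 2-coordinated.
# At x = 0.2 the network reaches the rigidity threshold <r> = 2.4.
r_bar = mean_coordination([0.2, 0.8], [4, 2])
```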

Mauro, John C.

473

Atmospheric Models for Aerocapture

NASA Technical Reports Server (NTRS)

There are eight destinations in the solar system with sufficient atmosphere for aerocapture to be a viable aeroassist option: Venus, Earth, Mars, Jupiter, Saturn and its moon Titan, Uranus, and Neptune. Engineering-level atmospheric models for four of these targets (Earth, Mars, Titan, and Neptune) have been developed for NASA to support systems analysis studies of potential future aerocapture missions. Development of a similar atmospheric model for Venus has recently commenced. An important capability of all of these models is their ability to simulate quasi-random density perturbations for Monte Carlo analyses in developing guidance, navigation, and control algorithms, and for thermal systems design. Similarities and differences among these atmospheric models are presented, with emphasis on the recently developed Neptune model and on planned characteristics of the Venus model. Example applications for aerocapture are also presented and illustrated. Recent updates to the Titan atmospheric model are discussed, in anticipation of applications for trajectory and atmospheric reconstruction of the Huygens probe entry at Titan.
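The quasi-random density perturbation capability can be caricatured as Gaussian fractional perturbations about a mean density; the engineering models actually use more structured wave/noise spectra, and every number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturbed_density(rho_mean, sigma_frac=0.05, n_samples=1000):
    """Draw Monte Carlo density samples as Gaussian fractional
    perturbations about a mean profile value -- a toy stand-in for the
    engineering models' structured perturbation machinery."""
    return rho_mean * (1.0 + sigma_frac * rng.standard_normal(n_samples))

# Hypothetical mean density at some altitude (kg/m^3), 5% dispersion
samples = perturbed_density(1.2e-2)
```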

Justus, C. G.; Duvall, Aleta L.; Keller, Vernon W.

2004-01-01

474

NASA Astrophysics Data System (ADS)

The exciplex pumped alkali laser (XPAL) system has been demonstrated in mixtures of Cs vapor, Ar, with and without ethane, by pumping Cs-Ar atomic collision pairs and subsequent dissociation of diatomic, electronically-excited CsAr molecules (exciplexes or excimers). The blue satellites of the alkali D2 lines provide an advantageous pathway for optically pumping atomic alkali lasers on the principal series (resonance) transitions with broad linewidth (>2 nm) semiconductor diode lasers. Because of the addition of atomic collision pairs and exciplex states, modeling of the XPAL system is more complicated than classic diode pumped alkali laser (DPAL) modeling. The BLAZE-V model is utilized for high-fidelity simulations. BLAZE-V is a time-dependent finite-volume model including transport, thermal, and kinetic effects appropriate for the simulation of a cylindrical closed cell XPAL system. The model is also regularly used for flowing gas laser simulations and is easily adapted for DPAL. High fidelity calculations of pulsed XPAL operation as a function of temperature and pressure are presented along with a theoretical analysis of requirements for optical transparency in XPAL systems. The detailed modeling predicts higher XPAL performance as the rare gas pressure increases, and that higher output powers are obtainable with higher temperature. The theoretical model indicates that the choice of alkali and rare gas mixture can significantly impact the required intensities for optical transparency.

Palla, Andrew D.; Carroll, David L.; Verdeyen, Joseph T.; Heaven, Michael C.

2011-02-01

475

Systematics of fragment angular momentum in low-energy fission of actinides

NASA Astrophysics Data System (ADS)

Independent isomeric yield ratios for 128Sb, 130Sb, 132Sb, 131Te, 133Te, 132I, 134I, 136I, 135Xe and 138Cs in 229Th(n_th,f), for 136I in 233U(n_th,f) and 239Pu(n_th,f), for 138Cs in 235U(n_th,f), for 130Sb, 136I and 135Xe in 241Pu(n_th,f), for 128Sb, 130Sb, 132Sb, 131Te, 133Te, 134I, 136I, 135Xe and 138Cs in 245Cm(n_th,f), and for 128Sb, 130Sb, 132Sb, 136I and 135Xe in 252Cf(S.F.) have been determined using radiochemical and gamma-ray spectrometric techniques. From the isomeric yield ratios, fragment angular momenta (Jrms) have been deduced using a spin-dependent statistical-model analysis. These data, along with the literature data in the above fissioning systems as well as in 249Cf(n_th,f), show several important features: (i) Angular momenta for fragments with a spherical 50-proton shell or 82-neutron shell and for even-Z products are lower than for fragments with a deformed 88-neutron shell, no shells, or odd-Z products, indicating nuclear-structure effects. (ii) Fission-fragment Jrms has a nearly inverse correlation with elemental yield in fissioning systems from 230Th* to 252Cf, possibly due to coupling between the collective and intrinsic degrees of freedom. (iii) Although the percentage odd-even effect in the elemental yield decreases from 230Th* to 250Cf* and 252Cf, the odd-even fluctuation in fragment Jrms remains nearly the same in spite of the inverse correlation, possibly indicating the effect of fragment deformation. (iv) Neither fission-product elemental yield nor angular momentum has a definite correlation with fissionability, since both are decided near the scission point.

Naik, H.; Dange, S. P.; Singh, R. J.; Datta, T.

1995-02-01

476

Autoregressive Model Using Fuzzy C-Regression Model Clustering for Traffic Modeling

NASA Astrophysics Data System (ADS)

Robust traffic modeling is required to perform effective congestion control in broadband digital networks. An autoregressive model using fuzzy c-regression model (FCRM) clustering is proposed for traffic modeling. This modeling method is simpler than previous methods. The experiments show that the proposed method is more robust for traffic modeling than the previous method.
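The autoregressive backbone of such a traffic model can be fit by ordinary least squares; the FCRM clustering step, which partitions the traffic into regression classes before fitting, is not shown in this sketch.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients a_1..a_p for the model
    x[t] = a_1*x[t-1] + ... + a_p*x[t-p] + noise."""
    x = np.asarray(x, float)
    # Row i of X holds the p lagged values preceding target y[i] = x[p + i]
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Noiseless AR(1) series with coefficient 0.7 is recovered exactly
x = 0.7 ** np.arange(60)
a = fit_ar(x, 1)
```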

Tanaka, Fumiaki; Suzuki, Yukinori; Maeda, Junji

477

Expert Models and Modeling Processes Associated with a Computer-Modeling Tool

ERIC Educational Resources Information Center

Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using "think aloud" technique…

Zhang, BaoHui; Liu, Xiufeng; Krajcik, Joseph S.

2006-01-01

478

NASA Technical Reports Server (NTRS)

Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already under-way, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing datasets. A high priority should be given to document existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.
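As a reminder of what the two-equation models under discussion compute, the standard k-epsilon closure obtains a turbulent (eddy) viscosity from the modeled turbulent kinetic energy and its dissipation rate; the input values below are arbitrary illustrations.

```python
def eddy_viscosity(k, eps, c_mu=0.09, rho=1.0):
    """Two-equation (k-epsilon) closure: turbulent viscosity
    mu_t = rho * C_mu * k^2 / eps, with the standard C_mu = 0.09."""
    return rho * c_mu * k * k / eps

# Arbitrary turbulence state: k = 0.5 m^2/s^2, eps = 10 m^2/s^3
mu_t = eddy_viscosity(k=0.5, eps=10.0)
```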

Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)

2001-01-01

479

Aviation Safety Simulation Model

NASA Technical Reports Server (NTRS)

The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
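The proximity monitoring described here amounts to checking each flight-path point's clearance above the terrain against an accepted minimum. A minimal sketch, with a hypothetical terrain function and flight path (not the model's actual interfaces):

```python
def check_clearance(path, terrain_height, min_clearance):
    """Return (index, clearance) for each path point whose height above
    the terrain falls below the accepted minimum distance."""
    violations = []
    for i, (x, alt) in enumerate(path):
        clearance = alt - terrain_height(x)
        if clearance < min_clearance:
            violations.append((i, clearance))
    return violations

# Hypothetical flat-then-hill terrain under a level flight path at 400 ft
terrain = lambda x: 0.0 if x < 5 else 300.0
path = [(x, 400.0) for x in range(10)]
violations = check_clearance(path, terrain, min_clearance=150.0)
```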

Houser, Scott; Yackovetsky, Robert (Technical Monitor)

2001-01-01

480

NSDL National Science Digital Library

The EJS Pendulum Energy Model shows a pendulum and associated energy bar charts. Users can change the initial starting point of the pendulum. The Pendulum Energy Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_middle_school_teacher_PendulumEnergy.jar file will run the program if Java is installed. The user can modify this simulation if EJS is installed by right-clicking within the plot and selecting "Open EJS Model" from the pop-up menu item.
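The energy bar charts in such a model follow directly from conservation: for a pendulum released from rest at angle theta0, PE = mgL(1 - cos theta) measured from the lowest point, and KE is the remainder of the constant total. A sketch with arbitrary parameters:

```python
import math

def pendulum_energies(m, L, theta0, theta, g=9.8):
    """Energy bar-chart quantities for a pendulum released from rest at
    theta0: PE from the lowest point, KE from energy conservation."""
    pe = m * g * L * (1 - math.cos(theta))
    total = m * g * L * (1 - math.cos(theta0))
    ke = total - pe
    return ke, pe, total

# At the bottom of the swing all energy is kinetic
ke, pe, total = pendulum_energies(m=0.5, L=1.0,
                                  theta0=math.radians(30), theta=0.0)
```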

Christian, Wolfgang; Belloni, Mario; Cox, Anne

2009-06-22

481

NASA Technical Reports Server (NTRS)

Measurements of wind speed, net irradiation, and of air, soil, and dew point temperatures in an orchard at the Rock Springs Agricultural Research Center, as well as topographical and climatological data and a description of the major apple growing regions of Pennsylvania, were supplied to the University of Florida for use in running the P-model freeze prediction program. Results show that the P-model appears to have considerable applicability to conditions in Pennsylvania. Even though modifications may have to be made for use in the fruit growing regions, the model offers advantages to fruit growers even in its present form.

Morrow, C. T. (principal investigator)

1981-01-01

482

NASA Technical Reports Server (NTRS)

The MSIS-86 empirical model of thermospheric temperature, density and composition uses new temperature and composition data from the Dynamics Explorer satellite to improve the representation of polar region morphology over that in the MSIS-83 model. Terms were added or changed to better represent seasonal variations in the polar regions under both quiet and magnetically disturbed conditions. Local time variations in the magnetic activity effect were added. In addition a new species, atomic nitrogen, was added to the previous list of N2, O2, He, O, H, and Ar covered by the model.

Hedin, Alan E.

1987-01-01

483

The MSIS-86 empirical model of thermospheric temperature, density and composition uses new temperature and composition data from the Dynamics Explorer satellite to improve the representation of polar region morphology over that in the MSIS-83 model. Terms were added or changed to better represent seasonal variations in the polar regions under both quiet and magnetically disturbed conditions. Local time variations in the magnetic activity effect were added. In addition, a new species, atomic nitrogen, was added to the previous list of N2, O2, He, O, H, and Ar covered by the model.

Hedin, A.E. (NASA Goddard Space Flight Center, Greenbelt, MD (United States))

1987-05-01

484

Modeling regional power transfers

The Spot Market Network (SMN) model was used to estimate spot market transactions and prices between various North American Electric Reliability Council (NERC) regions for summer on-peak situations. A preliminary analysis of new or proposed additions to the transmission network was performed. The effects of alternative exempt wholesale generator (EWG) options on spot market transactions and the transmission system are also studied. This paper presents the SMN regional modelling approach and summarizes simulation results. Although the paper focuses on a regional network representation, a discussion of how the SMN model was used to represent a detailed utility-level network is also presented.

Kavicky, J.A.; Veselka, T.D.

1994-03-01

485

Strategic workforce planning model

US Patent & Trademark Office Database

Systems, devices, and methods are provided for workforce planning models. Technologies are described to manage human capital decisions. Decision making models and related tools are described that support the development and implementation of workforce strategies, programs and policies. In one model, resources may be allocated to specific practices (policies, programs, initiatives, organizational culture) used to attract and retain valued employees. Resources may be increased or decreased until the optimal allocation of resources is found that is most likely to enable the achievement of specific goals (e.g., attraction, retention, readiness, and representation).

2013-02-26

486

Computer Modeling Of Atomization

NASA Technical Reports Server (NTRS)

Improved mathematical models based on the fundamental principles of conservation of mass, energy, and momentum have been developed for use in computer simulation of the atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications, proving especially useful in designing improved industrial sprays: humidifier water sprays, chemical process sprays, and sprays of molten metal. Because the present improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.

Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.

1994-01-01

487

NSDL National Science Digital Library

The Noon Shadow model shows the geometry of the shadow cast by a gnomon at noon. Users can change the orientation of the gnomon as well as its latitude. The height of the gnomon and its shadow length are displayed in Earth radius units. The Noon Shadow model is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_astronomy_Noon Shadow.jar file will run the program if Java is installed. You can modify this simulation if you have EJS installed by right-clicking within the plot and selecting "Open EJS Model" from the pop-up menu item.
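The displayed geometry reduces to a simple relation at local noon: the solar zenith angle is |latitude - declination|, so the shadow length is the gnomon height times the tangent of that angle. A sketch (ignoring atmospheric refraction and the Sun's finite size):

```python
import math

def noon_shadow_length(latitude_deg, declination_deg, gnomon_height=1.0):
    """Shadow length of a vertical gnomon at local noon: the solar
    zenith angle is |latitude - declination|, so
    length = height * tan(zenith)."""
    zenith = math.radians(abs(latitude_deg - declination_deg))
    return gnomon_height * math.tan(zenith)

# At the equinox (declination 0) at 45 degrees latitude, the zenith
# angle is 45 degrees, so the shadow length equals the gnomon height.
shadow = noon_shadow_length(45.0, 0.0)
```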

Christian, Wolfgang; Timberlake, Todd

2010-04-20

488

Stereolithographic models of biopolymers.

Stereolithography (STL) has been used to make plastic models of the solvent accessible surfaces of biopolymers. Models have been made of proteins and proteins bound to DNA and RNA. The STL process uses a laser to photopolymerize a liquid resin. Using the ACES (accurate, clear, epoxy, solid) building technique, parts are made with minimum postcure shrinkage. Protein Data Bank files are converted to STL files that represent the surface topology of the biopolymer as a series of triangles and an index that describes their orientation. The models are useful in teaching biomolecular structure and the principle of docking. They are especially useful to the visually impaired. PMID:10935203
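The triangles-plus-orientation representation described here is exactly what the ASCII STL format records: each facet is a unit normal plus three vertices. A minimal writer for one facet (illustrative; real surface files contain thousands of facets inside a solid/endsolid wrapper):

```python
def stl_facet(normal, v1, v2, v3):
    """Format one triangle of an ASCII STL file: a unit normal giving
    the facet's orientation, plus its three vertices."""
    fmt = lambda p: " ".join(f"{c:.6e}" for c in p)
    return (
        f"facet normal {fmt(normal)}\n"
        "  outer loop\n"
        f"    vertex {fmt(v1)}\n"
        f"    vertex {fmt(v2)}\n"
        f"    vertex {fmt(v3)}\n"
        "  endloop\n"
        "endfacet\n"
    )

# A single triangle in the z = 0 plane, facing +z
triangle = stl_facet((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))
```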

Yourtee, D; Emery, J; Smith, R E; Hodgson, B

2000-02-01

489

Mouse models of medulloblastoma

Medulloblastoma is the most common malignant pediatric brain tumor. Despite its prevalence and importance in pediatric neuro-oncology, the genes and pathways responsible for its initiation, maintenance, and progression remain poorly understood. Genetically engineered mouse models are an essential tool for uncovering the molecular and cellular basis of human diseases, including cancer, and serve a valuable role as preclinical models for testing targeted therapies. In this review, we summarize how such models have been successfully applied to the study of medulloblastoma over the past decade and what we might expect in the coming years.

Wu, Xiaochong; Northcott, Paul A.; Croul, Sidney; Taylor, Michael D.

2011-01-01

490

Atmospheric and Oceanic Modeling

NSDL National Science Digital Library

The numerical methods, formulation and parameterizations used in models of the circulation of the atmosphere and ocean will be described in detail. Widely used numerical methods will be the focus but we will also review emerging concepts and new methods. The numerics underlying a hierarchy of models will be discussed, ranging from simple GFD models to the high-end GCMs. In the context of ocean GCMs, we will describe parameterization of geostrophic eddies, mixing and the surface and bottom boundary layers. In the atmosphere, we will review parameterizations of convection and large scale condensation, the planetary boundary layer and radiative transfer.
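As a minimal example of the finite-difference methods such a course covers, here is the first-order upwind scheme for 1D linear advection on a periodic grid, a standard starting point before the more sophisticated schemes used in GCMs.

```python
import numpy as np

def upwind_advect(u, c, dx, dt, steps):
    """First-order upwind scheme for du/dt + c*du/dx = 0 with c > 0 on
    a periodic grid. Stable for Courant number nu = c*dt/dx <= 1; at
    nu = 1 the scheme shifts the field exactly one cell per step."""
    u = np.asarray(u, float).copy()
    nu = c * dt / dx
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))  # periodic upwind difference
    return u

# A unit spike advected with nu = 1 translates without distortion
u0 = np.zeros(50)
u0[10] = 1.0
shifted = upwind_advect(u0, c=1.0, dx=1.0, dt=1.0, steps=3)
```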

Adcroft, Alistair; Emanuel, Kerry A., 1955-; Marshall, John

2007-04-07

491

NASA Astrophysics Data System (ADS)

The ESA Gaia astrometric mission has been designed to create an extraordinarily precise 3D map of about one billion stars throughout the Galaxy and beyond. The Gaia Universe Model provides the astronomical sources (with their 3D positions, velocities, magnitudes and physical parameters) required to generate the simulated data for the development and testing of the massive data reduction software. Different types of objects, both galactic and extra-galactic, are provided by the model, including normal stars, several types of variable stars, supernovae, unresolved stars and quasars. A full description of the Gaia Universe Model can be found in Robin et al. 2012 (Astronomy & Astrophysics, 453, A100).

Masana, E.; Luri, X.; Borrachero, R.; Robin, A.; Jordi, C.

2013-05-01

492

Modeling microtubule oscillations

Synchronization of molecular reactions in a macroscopic volume may cause the volume's physical properties to change dynamically and thus reveal much about the reactions. As an example, experimental time series for so-called microtubule oscillations are analyzed in terms of a minimal model for this complex polymerization-depolymerization cycle. The model reproduces well the qualitatively different time series that result from different experimental conditions, and illuminates the role and importance of individual processes in the cycle. Simple experiments are suggested that can further test and define the model and the polymer's reaction cycle.
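The polymerization-depolymerization cycle can be caricatured as a three-pool system: GTP-tubulin polymerizes, catastrophes release GDP-tubulin, which slowly recycles. All rate constants below are made up, and this linear cycle only exhibits damped oscillations toward steady state; the published minimal model's nonlinearities, needed for sustained oscillations, are omitted. Total tubulin is conserved.

```python
import numpy as np

def tubulin_cycle(t0=1.0, p0=0.0, d0=0.0, kg=1.0, kc=0.5, kr=0.1,
                  dt=0.01, steps=5000):
    """Euler integration of a schematic three-pool cycle: GTP-tubulin T
    polymerizes into P at rate kg, catastrophes release GDP-tubulin D
    at rate kc, and D recycles back to T at rate kr."""
    T, P, D = t0, p0, d0
    history = []
    for _ in range(steps):
        dT = -kg * T + kr * D
        dP = kg * T - kc * P
        dD = kc * P - kr * D
        T, P, D = T + dt * dT, P + dt * dP, D + dt * dD
        history.append((T, P, D))
    return np.array(history)

traj = tubulin_cycle()  # columns: T, P, D over time
```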

Jobs, Elmar [Hoechstleistungsrechenzentrum, Forschungszentrum Juelich GmbH, D-52425 Juelich (Germany); Wolf, Dietrich E. [Theoretical Physics FB10, Gerhard-Mercator-University, D-47048 Duisburg (Germany); Flyvbjerg, Henrik [Condensed Matter Physics and Chemistry Department, Risoe National Laboratory, DK-4000 Roskilde (Denmark); The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Oe (Denmark)

1999-10-05

493

Vector Addition Patterns Model

NSDL National Science Digital Library

The Vector Addition Patterns model illustrates the tail-to-tip method of adding vectors. The table at the bottom shows the components and lengths of the vectors. You can also rotate the vectors and trace out some interesting patterns. The Vector Addition Patterns model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_vector_addition_patterns.jar file will run the program if Java is installed.
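Tail-to-tip addition is just a cumulative sum of components. A numpy sketch reproducing the quantities in the model's table (components, per-vector lengths, intermediate tips, and the resultant), using arbitrary example vectors:

```python
import numpy as np

# Three example vectors to add tail-to-tip
vectors = np.array([[3.0, 0.0], [0.0, 4.0], [-1.0, 1.0]])

tips = np.cumsum(vectors, axis=0)          # intermediate tip positions
resultant = vectors.sum(axis=0)            # sum of components
lengths = np.linalg.norm(vectors, axis=1)  # per-vector lengths
```

The last tail-to-tip tip position coincides with the resultant, which is the geometric content of the tail-to-tip method.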

Duffy, Andrew