Science.gov

Sample records for airshed model uam

  1. URBAN AIRSHED MODEL (UAM)

    EPA Science Inventory

    The Urban Airshed Model (UAM) is an urban scale, three-dimensional, grid type, numerical simulation model. The model incorporates a condensed photochemical kinetics mechanism for urban atmospheres. The UAM is designed for computing ozone (O3) concentrations under short-term, epis...

  2. User's guide for the Urban Airshed Model. Volume 5. Description and operation of the ROM - UAM interface program system

    SciTech Connect

    Tang, R.T.; Gerry, S.C.; Newsome, J.S.; Van Meter, A.R.; Wayland, R.A.

    1990-06-01

    The user's guide for the Urban Airshed Model (UAM) is divided into five volumes. Volume V describes the ROM-UAM interface program system, a software package that can be used to generate UAM input files from inputs and outputs provided by the EPA Regional Oxidant Model (ROM).

  3. Sensitivity of Urban Airshed Model (UAM-IV) calculated air pollutant concentrations to the vertical diffusion parameterization during convective meteorological situations

    SciTech Connect

    Nowacki, P.; Samson, P.J.; Sillman, S.

    1996-10-01

    It is shown that air pollutant concentrations calculated by the Urban Airshed Model (UAM-IV) during photochemical smog episodes in Atlanta, Georgia, depend strongly on the numerical parameterization of the daytime vertical diffusivity. The results suggest that the UAM-IV overestimates vertical mixing during unstable daytime conditions, as the calculated vertical diffusivity values exceed measured and comparable literature values. Although deviations between measured and UAM-IV-calculated air pollutant concentrations may be only partly due to the UAM-IV diffusivity parameterization, the results indicate the large error potential in the vertical diffusivity parameterization. Easily implemented enhancements to the UAM-IV algorithms are proposed to improve modeling performance during unstable stratification. 38 refs., 14 figs., 1 tab.
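The mixing issue described above can be illustrated with a toy convective diffusivity profile. A minimal sketch, assuming a cubic-polynomial K(z) shape of the kind widely used for convective boundary layers (Troen-Mahrt style); this is not the exact UAM-IV parameterization, and the mixing height `zi` and convective velocity scale `w_star` are illustrative values only.

```python
KARMAN = 0.4  # von Karman constant

def k_profile(z, zi, w_star):
    """Convective vertical diffusivity K(z) = k * w* * z * (1 - z/zi)**2.

    A common cubic-polynomial shape for unstable conditions; illustrative,
    not the actual UAM-IV scheme.
    """
    if z <= 0.0 or z >= zi:
        return 0.0  # no turbulent mixing at the surface or above the mixed layer
    return KARMAN * w_star * z * (1.0 - z / zi) ** 2

# Example: a 1500 m deep convective boundary layer with w* = 2 m/s.
# This functional form peaks at z = zi/3.
zi, w_star = 1500.0, 2.0
k_peak = k_profile(zi / 3.0, zi, w_star)  # m^2/s
```

Comparing such a modeled profile against measured diffusivities is, in spirit, the check the abstract describes: if K(z) from the model sits well above observed values, vertical mixing is being overestimated.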

  4. SAI (SYSTEMS APPLICATIONS, INCORPORATED) URBAN AIRSHED MODEL

    EPA Science Inventory

    The magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air quality simulation model that is well suited for predicting the spatial and temporal distribution of photoch...

  5. DEVELOPMENT OF AN IMPROVED URBAN AIRSHED MODELING SYSTEM

    EPA Science Inventory

    A research and development effort to improve certain physical processes simulated in the Urban Airshed Model (UAM) processor and model programs, and to update the computer software, is described. The UAM is an Eulerian photochemical grid model designed to simulate the relevant phys...

  6. SAI (Systems Applications, Incorporated) Urban Airshed Model. Model

    SciTech Connect

    Schere, K.L.

    1985-06-01

    This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest in the UAM simulations are O3, NO, NO2, and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program, which accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system, level 38R5A. Memory requirement is 80K.
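The conservation-of-mass transport described above can be sketched in one dimension. A minimal sketch, assuming a periodic row of grid cells and a constant wind; the real UAM solves the full 3-D problem with chemistry, diffusion, emissions and deposition on top of transport.

```python
def upwind_advect(c, u, dx, dt):
    """One explicit upwind step of dc/dt = -u * dc/dx (u > 0) on a periodic
    1-D row of grid cells -- a toy stand-in for the finite-difference
    transport solved across the UAM grid.
    """
    cour = u * dt / dx  # Courant number; the scheme is stable for cour <= 1
    assert 0.0 <= cour <= 1.0, "CFL condition violated"
    return [c[i] - cour * (c[i] - c[i - 1]) for i in range(len(c))]

# A pollutant puff advected across 100 cells of 1 km each by a 5 m/s wind.
c = [0.0] * 100
for i in range(10, 20):
    c[i] = 1.0
mass0 = sum(c)
for _ in range(50):
    c = upwind_advect(c, u=5.0, dx=1000.0, dt=100.0)
```

Because the domain is periodic and the scheme is conservative, total mass is unchanged after any number of steps, which is exactly the property the model's conservation equations encode.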

  7. Study using a three-dimensional photochemical smog formation model under conditions of complex flow: Application of the Urban Airshed Model to the Tokyo Metropolitan Area. Rept. for Jan 85-Jan 91

    SciTech Connect

    Wakamatsu, S.; Schere, K.L.

    1991-03-01

    The purpose of the study is to evaluate the Urban Airshed Model (UAM), a three-dimensional photochemical urban air quality simulation model, using field observations from the Tokyo Metropolitan Area. Emphasis was placed on the photochemical smog formation mechanism under stagnant meteorological conditions. The UAM produced reasonable calculated results for the diurnal, areal and vertical distributions of O3 concentrations over the Tokyo Metropolitan Area. The role and significance of the previous day's secondary pollutants in O3 formation mechanisms were also investigated. During the nighttime, high values of secondary pollutant concentrations were predicted above the radiation inversion layer. These aged pollutants were then entrained into the mixing layer during the day as the inversion lid rose. These characteristic features were also observed in the field study.

  8. SAI (SYSTEMS APPLICATIONS, INCORPORATED) AIRSHED MODEL OPERATIONS MANUALS. VOLUME 2. SYSTEMS MANUAL

    EPA Science Inventory

    This report describes the Systems Applications, Inc. (SAI) Airshed Model System from a programmer's point of view. Included are discussions of all subroutines and how they fit together, run-time core allocation techniques, internal methods of segment handling using secondary stor...

  9. SENSITIVITY ANALYSIS OF A NESTED OZONE AIR QUALITY MODEL

    EPA Science Inventory

    A series of Urban Airshed Model (UAM) simulations were performed using inputs derived from Regional Oxidant Model (ROM) data files. The gridded ROM results employed in the UAM simulations included concentrations for specifying initial and boundary conditions, wind fields, other m...

  10. USING THE REGIONAL ACID DEPOSITION MODEL TO DETERMINE THE NITROGEN DEPOSITION AIRSHED OF THE CHESAPEAKE BAY WATERSHED

    EPA Science Inventory

    The Regional Acid Deposition Model, RADM, an advanced Eulerian model, is used to develop an estimate of the primary airshed of nitrogen oxide (NOx) emissions that is contributing nitrogen deposition to the Chesapeake Bay watershed. A brief description of RADM together with a summary...

  11. PHOTOCHEMICAL URBAN AIRSHED MODELING USING DIAGNOSTIC AND DYNAMIC METEOROLOGICAL FIELDS

    EPA Science Inventory

    Spatial pollutant patterns and peak concentrations are strongly influenced by meteorological parameters. Therefore, accurate hourly, gridded meteorological data sets are crucial inputs for photochemical modeling. An effort has been underway to apply both diagnostic and dynamic mete...

  12. Longitudinal Structure of the 135.6 nm Ionospheric Emission as Modeled by the CMAM-UAM Model System

    NASA Astrophysics Data System (ADS)

    Martynenko, O. V.; Fomichev, V. I.; Semeniuk, K.; Beagley, S. R.; Ward, W. E.; McConnell, J. C.

    2012-12-01

    The development of whole atmosphere models extending from the surface up to the ionosphere is a new field that has recently been attracting growing attention from the geophysical community. One of the primary goals of whole atmosphere modeling is to determine how lower-atmosphere dynamical variability propagates into and affects the upper atmosphere and ionosphere. As the first step toward developing a whole atmosphere model in Canada, the extended Canadian Middle Atmosphere Model (CMAM) has been coupled with Murmansk's ionospheric Upper Atmosphere Model (UAM). To find out whether this model system is capable of capturing the effect of upward-penetrating waves generated in the lower atmosphere on the ionospheric structure, it was used to reproduce the longitudinal structure of the 135.6 nm ionospheric emission observed by the IMAGE-FUV imager. These observations show four emission peaks in longitude in the equatorial region at 8 pm local time (and hence four peaks in electron/ion density), a pattern which cannot be obtained with the ionospheric model alone. Results obtained with the CMAM-UAM model system show good agreement with the observations. Analysis of the model results also suggests that the main mechanism generating this longitudinal structure of the ionospheric emission is a modification of the ionospheric electric field in the E-region, caused by differences in the diurnal evolution of the zonal wind in different longitudinal sectors due to waves penetrating from the lower atmosphere.

  13. NESTED GRID MODELING APPROACH FOR ASSESSING URBAN OZONE AIR QUALITY

    EPA Science Inventory

    This paper describes an effort to interface the modeled concentrations and other outputs of the Regional Oxidant Model (ROM) as an alternative set of input files to apply in Urban Airshed Model (UAM) simulations. Five different days exhibiting high ozone concentrations during the ...

  14. Urban airshed modeling of air quality impacts of alternative transportation fuel use in Los Angeles and Atlanta

    SciTech Connect

    1997-12-01

    The main objective of NREL in supporting this study is to determine the relative air quality impact of using compressed natural gas (CNG) as an alternative transportation fuel compared with low Reid vapor pressure (RVP) gasoline and reformulated gasoline (RFG). A table lists the criteria, air toxic, and greenhouse gas pollutants for which emissions were estimated for the alternative fuel scenarios. Air quality impacts were then estimated by performing photochemical modeling of the alternative fuel scenarios using the Urban Airshed Model Version 6.21 and the Carbon Bond Mechanism Version IV (CBM-IV) (Geary et al., 1988). Using this model, the authors examined the formation and transport of ozone under alternative fuel strategies for motor vehicle transportation sources for the year 2007. Photochemical modeling was performed for modeling domains in Los Angeles, California, and Atlanta, Georgia.

  15. Evaluating spatial outliers and integrating temporal data in air pollution models for the Detroit-Windsor airshed

    NASA Astrophysics Data System (ADS)

    O'Leary, Brendan F.

    The heterogeneous nature of urban air complicates human exposure estimates and creates a need for accurate, highly detailed spatiotemporal air contaminant models. The study expands on previous investigations by the Geospatial Determinants of Health Outcomes Consortium that examined relationships between air pollutant distributions and asthma exacerbations. Two approaches, the removal of spatial data outliers and the integration of spatial and temporal data, were used to refine air quality models in the Detroit and Windsor international airshed. The evaluation of associations between the resulting air quality models and asthma exacerbations in Detroit and Windsor revealed weaker correlations with spatial outliers removed but improved correlations with the addition of temporal data. Recommendations for future work include increasing the spatial and temporal resolution of the asthma datasets and incorporating Windsor NAPS data through temporal scaling to help confirm the findings of the Detroit temporal scaling.

  16. SPATIAL ANALYSIS OF AIR POLLUTION AND DEVELOPMENT OF A LAND-USE REGRESSION ( LUR ) MODEL IN AN URBAN AIRSHED

    EPA Science Inventory

    The Detroit Children's Health Study is an epidemiologic study examining associations between chronic ambient environmental exposures to gaseous air pollutants and respiratory health outcomes among elementary school-age children in an urban airshed. The exposure component of this...

  17. Modeling the effects of VOC/NOx emissions on ozone synthesis in the cascadia airshed of the Pacific Northwest.

    PubMed

    Barna, M; Lamb, B; Westberg, H

    2001-07-01

    A modeling system consisting of MM5, Calmet, and Calgrid was used to investigate the sensitivity of ozone formation within the Cascadia airshed of the Pacific Northwest to reductions in anthropogenic volatile organic compound (VOC) and oxides of nitrogen (NOx) emissions. An ozone episode that occurred on July 11-14, 1996, was evaluated. During this event, high ozone levels were recorded at monitors downwind of Seattle, WA, and Portland, OR, with one monitor exceeding the 1 hr/120 ppb National Ambient Air Quality Standard (at 148 ppb), and six monitors above the proposed 8 hr/80 ppb standard (at 82-130 ppb). For this particular case, significant emissions reductions, between 25 and 75%, would be required to decrease peak ozone concentrations to desired levels. Reductions in VOC emissions alone, or a combination of reduced VOC and NOx emissions, were generally found to be most effective; reducing NOx emissions alone resulted in increased ozone in the Seattle area. When only VOC emissions were curtailed, ozone reductions occurred in the immediate vicinity of densely populated areas, while NOx reductions resulted in more widespread ozone reductions. PMID:15658221

  18. Application of U.S. EPA's Models-3 prototype to two airsheds in the United States

    SciTech Connect

    Pai, P.; Vijayaraghavan, K.; Lohman, K.; Seigneur, C.; Hansen, A.

    1999-07-01

    The Multiscale Air Quality Simulation Platform (MAQSIP) is a prototype for US EPA's Models-3. Using the modular software design of MAQSIP, an air quality model can be constructed by selecting science modules from various options. The authors present results from application of MAQSIP to the Los Angeles basin. The application to the Los Angeles basin consisted of simulating the August 1987 Southern California Air Quality Study (SCAQS) episode and benchmarking the performance of MAQSIP against that of another comprehensive air quality model, SAQM. The meteorology to drive the two models was generated by the mesoscale model MM5. The input emissions, initial conditions, and boundary conditions were mostly identical for both models. The science modules of MAQSIP were chosen so as to emulate SAQM. Therefore, differences in the estimates of the two models are primarily due to differences in the software design. After the benchmarking exercise, the authors applied the MAQSIP/MM5 system to the 1995 SOS Nashville/Middle Tennessee database. Evaluation of the modeling system is currently ongoing and consists of comparisons of model estimates against corresponding surface observations from a network of sites and upper air observations from aircraft.

  19. Retrieval of the single scattering albedo in the El Paso-Juarez Airshed using the TUV model and a UV-MFRSR radiometer

    NASA Astrophysics Data System (ADS)

    Medina, Richard; Fitzgerald, Rosa M.; Min, Qilong

    2012-01-01

    A methodology to retrieve Single Scattering Albedo (SSA) values employing the ratio of direct to diffuse irradiances (DDR) is applied to the El Paso-Juarez Airshed, a challenging region where air masses interact. The TUV model was used to obtain the calculated DDR irradiances, and the experimental irradiances were obtained from a UV-MFRSR instrument located in the city of El Paso, Texas. The wavelengths used were 332 nm and 368 nm. The retrieved SSA values at both 332 nm and 368 nm were higher on a lightly polluted day (0.66-0.81 at 332 nm and 0.61-0.80 at 368 nm) than on a more heavily polluted day (0.56-0.70 at 332 nm and 0.53-0.66 at 368 nm). A sensitivity study of the ground albedo and the asymmetry parameter was performed, which indicated that variation of the asymmetry parameter is a secondary effect in the retrievals of SSA. In addition, the variation of SSA values during the day was analyzed for the El Paso-Juarez Airshed and linked to the flow of air masses into the region using HYSPLIT trajectories. The presence of absorbing aerosols was observed during the late morning and the middle of the day. This methodology can be applied in any area and is particularly useful for cities that experience episodes of high PM concentrations.
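The retrieval described above amounts to inverting a radiative-transfer model: find the SSA at which the modeled direct-to-diffuse ratio matches the measured one. A minimal sketch with a made-up forward model standing in for TUV (the functional form of `ddr_forward` is hypothetical; only the inversion-by-bisection structure is the point):

```python
import math

def ddr_forward(ssa, aod=0.5):
    """Hypothetical direct-to-diffuse irradiance ratio for a given single
    scattering albedo; a stand-in for a TUV run. Diffuse light grows with
    SSA, so the ratio decreases monotonically in SSA.
    """
    direct = math.exp(-aod)
    diffuse = ssa * aod * math.exp(-aod / 2.0)
    return direct / diffuse

def retrieve_ssa(ddr_obs, lo=0.3, hi=1.0, tol=1e-6):
    """Bisection on the monotone-decreasing forward model."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ddr_forward(mid) > ddr_obs:
            lo = mid  # modeled ratio too high -> need more scattering (higher SSA)
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In the real methodology the forward model is a TUV calculation at 332 or 368 nm and the observed ratio comes from the UV-MFRSR instrument.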

  20. Meteorological and photochemical modeling of large-scale albedo changes in the South Coast Air Basin

    SciTech Connect

    Tran, K.T.; Mirabella, V.A.

    1998-12-31

    The effectiveness of large-scale surface albedo changes as an ozone control strategy is investigated. These albedo changes are part of the Cool Communities strategy that calls for the use of lighter colored roofing and paving materials as well as an increase in tree planting. The advanced mesoscale model MM5 was used to analyze the associated effects on ambient temperature, mixing depth and winds. The MM5 model was modified to accept surface properties derived from a satellite-based land use database. Preprocessors were also developed to allow a research-oriented model such as MM5 to be user friendly and amenable to practical, routine air quality modeling applications. Changes in ozone air quality are analyzed with the Urban Airshed Model (UAM). Results of the MM5/UAM simulations of the SCAQS August 26--28, 1987 ozone episode are presented and compared to those obtained with the CSUMM/UAM models.

  1. Continued research in mesoscale air-pollution simulation modeling. Volume 5. Refinements in numerical analysis, transport, chemistry, and pollutant removal. Final report, October 1979-July 1982. [AIRSHED model and SHASTA method]

    SciTech Connect

    Killus, J.P.; Meyer, J.P.; Durran, D.R.; Anderson, G.E.; Jerskey, T.N.

    1984-12-01

    Two numerical integration methods identified as having features that provide significant improvements over the technique originally embedded in the Airshed Model have been evaluated. Of particular concern was the treatment of horizontal transport. In the evaluation of the schemes, the predictions resulting from the SHASTA method differed by no more than about 20 percent from those generated using the original method. In addition, SHASTA possesses the better blend of computational speed and minimal error propagation. An objective analysis technique has been developed for obtaining a gridded, time-varying, fully three-dimensional wind field for the Airshed Model from available measurements. The technique accounts for urban heat-island effects and should be directly applicable to relatively flat areas. A 42-step chemical kinetic mechanism is presented for describing the chemical transformations of organics, NOx, O3, and SO2 and the production of sulfate, nitrate, and organic aerosols. A unique feature of this mechanism is the explicit consideration given to the carbon bonds making up each organic molecule. An algorithm that relates the effective deposition velocity to the stability of the atmosphere and the type of surface has been implemented in the Airshed Model. Surface removal processes may significantly affect the concentrations of O3, NOx, and SO2.
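The deposition algorithm mentioned above can be sketched with the standard resistance-in-series formulation. A minimal sketch: the resistance values are illustrative, and the actual Airshed algorithm's functional forms may differ, but the ingredients are the same (an aerodynamic resistance that depends on stability and a surface resistance that depends on surface type).

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity v_d = 1/(Ra + Rb + Rc).

    r_a: aerodynamic resistance (depends on atmospheric stability), s/m
    r_b: quasi-laminar sublayer resistance, s/m
    r_c: surface (canopy) resistance (depends on surface type), s/m
    Returns v_d in m/s.
    """
    return 1.0 / (r_a + r_b + r_c)

# Illustrative values: unstable daytime air mixes pollutants to the surface
# faster (small Ra) than a stable night (large Ra), so removal is quicker.
v_day = deposition_velocity(r_a=30.0, r_b=20.0, r_c=100.0)
v_night = deposition_velocity(r_a=300.0, r_b=20.0, r_c=100.0)
```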

  2. Air quality modeling of selected aromatic and non-aromatic air toxics in the Houston urban and industrial airshed

    NASA Astrophysics Data System (ADS)

    Coarfa, Violeta Florentina

    2007-12-01

    Air toxics, also called hazardous air pollutants (HAPs), pose a serious threat to human health and the environment. Their study is important in the Houston area, where point sources (mostly located along the Ship Channel) as well as mobile and area sources contribute large emissions of such toxic pollutants. Previous studies carried out in this area found dangerous levels of different HAPs in the atmosphere. This thesis presents several studies of aromatic and non-aromatic air toxics in the Houston-Galveston area (HGA). For these studies we developed several tools: (1) a refined chemical mechanism that explicitly represents 18 aromatic air toxics, grouped by their reactivity with the hydroxyl radical, which the previous version had lumped under two model species; (2) an engineering version of an existing air-toxics photochemical model that enables much faster long-term simulations than the original model, leading to an 8-9 times improvement in running time across different computing platforms; (3) a combined emission inventory based on the available emission databases. Using the developed tools, we quantified the mobile-source impact on a few selected air toxics and analyzed the temporal and spatial variation of selected aromatic and non-aromatic air toxics in a few regions within the Houston area; these regions were characterized by different emissions and environmental conditions.

  3. Development of a modified factor analysis/multiple regression model to apportion suspended particulate matter in a complex urban airshed

    NASA Astrophysics Data System (ADS)

    Morandi, Maria T.; Daisey, Joan M.; Lioy, Paul J.

    A modified factor analysis/multiple regression (FA/MR) receptor-oriented source apportionment model has been developed that permits application of FA/MR statistical methods when some of the tracers are not unique to an individual source type. The new method uses factor and regression analyses to apportion the ambient concentration of a non-unique tracer in situations where unique tracers exist for all but one of the sources contributing to it, and ascribes the residual concentration to that remaining source. This value is then used as the source tracer in the final FA/MR apportionment model for ambient particulate matter. In addition, the factor analysis results are complemented with examination of regression residuals in order to optimize the number of identifiable sources. The new method has been applied to identify and apportion the sources of inhalable particulate matter (IPM; D50 = 15 μm), Pb and Fe at a site in Newark, NJ. The model indicated that sulfate/secondary aerosol contributed an average of 25.8 μg m⁻³ (48%) to IPM concentrations, followed by soil resuspension (8.2 μg m⁻³ or 15%), paint spraying/paint pigment (6.7 μg m⁻³ or 13%), fuel oil burning/space heating (4.3 μg m⁻³ or 8%), industrial emissions (3.6 μg m⁻³ or 7%) and motor vehicle exhaust (2.7 μg m⁻³ or 5%). Contributions to ambient Pb concentrations were: motor vehicle exhaust (0.16 μg m⁻³ or 36%), soil resuspension (0.10 μg m⁻³ or 24%), fuel oil burning/space heating (0.08 μg m⁻³ or 18%), industrial emissions (0.07 μg m⁻³ or 17%), paint spraying/paint pigment (0.036 μg m⁻³ or 9%) and zinc-related sources (0.022 μg m⁻³ or 5%). Contributions to ambient Fe concentrations were: soil resuspension (0.43 μg m⁻³ or 51%), paint spraying/paint pigment (0.28 μg m⁻³ or 33%) and industrial emissions (0.15 μg m⁻³ or 18%). The models were validated by comparing partial source profiles calculated from the modeling results with the corresponding published source emission compositions.
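The residual step at the heart of the modified FA/MR method can be sketched with a single regressor. A minimal sketch with synthetic numbers: the real model regresses on several unique tracers and combines this with factor analysis, but the idea of ascribing the regression residual to the one source lacking a unique tracer is the same.

```python
def apportion_by_residual(y, x):
    """Regress a non-unique tracer y on a unique tracer x (through the
    origin) and treat the residual as the contribution of the one source
    that has no unique tracer of its own.
    """
    beta = sum(yi * xi for yi, xi in zip(y, x)) / sum(xi * xi for xi in x)
    residual = [yi - beta * xi for yi, xi in zip(y, x)]
    return beta, residual

# Synthetic example: y receives 2 units per unit of x from the traced source
# plus a constant 0.5 from an untraced source.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0 * xi + 0.5 for xi in x]
beta, residual = apportion_by_residual(y, x)
```

The residual series would then serve as the "tracer" for the untraced source in the final FA/MR apportionment.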

  4. Georgia Basin-Puget Sound Airshed Characterization Report 2014

    EPA Science Inventory

    The Georgia Basin - Puget Sound Airshed Characterization Report, 2012 was undertaken to characterize the air quality within the Georgia Basin/Puget Sound region, a vibrant, rapidly growing, urbanized area of the Pacific Northwest. The Georgia Basin - Puget Sound Airshed Characteri...

  5. Using MFACE as input in the UAM to specify the MIT dynamics

    NASA Astrophysics Data System (ADS)

    Prokhorov, B. E.; Förster, M.; He, M.; Namgaladze, A. A.; Holschneider, M.

    2014-08-01

    The magnetosphere-ionosphere-thermosphere (MIT) dynamic system depends significantly on the highly variable solar wind conditions, in particular on changes in the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). Global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents, MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input to the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with CHAMP satellite measurements. In addition, we perform comparisons with global empirical models: the thermospheric wind model (HWM07) and the atmospheric density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows good agreement with the satellite observations and improved behavior compared with the empirical models at high latitudes. Using the MFACE model as an input parameter of the UAM, we obtain a realistic distribution of upper-atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and for space weather predictions.

  6. The use of an atmospheric dispersion model to determine influence regions in the Prince George, B.C. airshed from the burning of open wood waste piles.

    PubMed

    Ainslie, B; Jackson, P L

    2009-06-01

    A means of determining air-emission source regions adversely influencing the city of Prince George, British Columbia, Canada from the potential burning of isolated piles of mountain pine beetle-killed lodgepole pine is presented. The analysis uses the CALPUFF atmospheric dispersion model to identify safe burning regions based on atmospheric stability and wind direction. Model results show that the location and extent of the influence regions are sensitive to wind speed, wind direction, atmospheric stability and the threshold used to quantify excessive concentrations. A concentration threshold based on the Canada-Wide PM2.5 Standard is used to delineate the influence regions, while Environment Canada's (EC) daily ventilation index (VI) is used to quantify local atmospheric stability. Results from the analysis, to be used by air quality meteorologists in assessing daily requests for burning permits, are presented as a series of maps delineating acceptable burning locations for sources placed at various distances from the city center and under different ventilation conditions. The results show that no burning should be allowed within 10 km of the city center; under poor ventilation conditions, no burning should be allowed within 20 km of the city center; under good ventilation conditions, burning can be allowed within 10-15 km of the city center; under good to fair ventilation conditions, burning can be allowed beyond 15 km of the city center; and if the wind direction can be reliably forecast, burning can be allowed between 5 and 10 km downwind of the city center under good ventilation conditions. PMID:19303193

  7. Model simulation of meteorology and air quality during the summer PUMA intensive measurement campaign in the UK West Midlands conurbation.

    PubMed

    Baggott, Sarah; Cai, Xiaoming; McGregor, Glenn; Harrison, Roy M

    2006-05-01

    The Regional Atmospheric Modeling System (RAMS) and the Urban Airshed Model (UAM-IV) have been implemented for prediction of air pollutant concentrations within the West Midlands conurbation of the United Kingdom. The modelling results for wind speed, direction and temperature are in reasonable agreement with observations at two stations, one in a rural area and the other in an urban area. Predictions of surface temperature are generally good for both stations, but the results suggest that the quality of temperature prediction is sensitive to whether cloud cover is reproduced reliably by the model. Wind direction is captured very well by the model, while wind speed is generally overestimated. The air pollution climate of the UK West Midlands is very different from those for which the UAM was primarily developed, and the methods used to overcome these limitations are described. The model shows a tendency towards under-prediction of primary pollutant (NOx and CO) concentrations, but with suitable attention to boundary conditions and vertical profiles gives fairly good predictions of ozone concentrations. Hourly updating of chemical concentration boundary conditions yields the best results, with input of vertical profiles desirable. The model seriously underpredicts NO2/NO ratios within the urban area, and this appears to relate to inadequate production of peroxy radicals. Overall, the chemical reactivity predicted by the model appears to fall well below that occurring in the atmosphere. PMID:16266739

  8. Atmospheric speciation of mercury in two contrasting Southeastern US airsheds

    NASA Astrophysics Data System (ADS)

    Gabriel, Mark C.; Williamson, Derek G.; Brooks, Steve; Lindberg, Steve

    Simultaneous measurement of gaseous elemental, reactive gaseous, and fine particulate mercury took place in Tuscaloosa, AL (urban airshed) and Cove Mountain, TN (non-urban airshed) during the summers of 2002 and 2003. The objectives of this research were to (1) summarize the temporal distribution of each mercury species at each site and compare it to speciation data sets developed by other researchers and (2) provide insight into urban and non-urban mercury speciation effects using various statistical methods. Average species concentrations were as follows: 4.05 ng m⁻³ (GEM), 13.6 pg m⁻³ (RGM), 16.4 pg m⁻³ (Hg-p) for Tuscaloosa; 3.20 ng m⁻³ (GEM), 13.6 pg m⁻³ (RGM), 9.73 pg m⁻³ (Hg-p) for Cove Mountain. As a result of urban airshed impacts, short periods of high concentrations of all mercury species were common in Tuscaloosa. At Cove Mountain a consistent mid-day rise and evening drop in mercury species was found. This pattern was primarily the result of unimpacted physical boundary-layer movement, although other potential influences were ambient photochemistry and air-surface exchange of mercury. Meteorological parameters known to strongly affect mercury speciation were similar at Tuscaloosa and Cove Mountain for the study period, except for wind speed (m s⁻¹), which was higher at Cove Mountain. For both sites, statistically significant (p < 0.0001) inverse relationships existed between wind speed and Hg⁰ concentration. A weaker wind speed-Hg⁰ correlation existed for Tuscaloosa. By analyzing the change in Hg concentration with wind speed at both sites, it was found that wind speed at Cove Mountain had a greater influence on Hg⁰ concentration variability than at Tuscaloosa, by a factor of 3. Using various statistical tests, we concluded that the nature of Tuscaloosa's atmospheric mercury speciation was the result of typical urban airshed impacts. Cove Mountain showed atmospheric mercury speciation characteristics indicative of a non-urban area along with

  9. Sources of polycyclic aromatic hydrocarbons to the Hudson River Airshed

    NASA Astrophysics Data System (ADS)

    Lee, Jong Hoon; Gigliotti, Cari L.; Offenberg, John H.; Eisenreich, Steven J.; Turpin, Barbara J.

    2004-11-01

    Sources of polycyclic aromatic hydrocarbons (PAHs) to the Hudson River Estuary Airshed were investigated using positive matrix factorization (PMF). A three-city dataset was used to obtain common factor profiles. The contributions of each factor on each sampling day and at each site were then determined, and a sensitivity analysis was conducted. A stable eight-factor solution was identified. PMF was able to identify a factor associated with air-surface exchange. This factor contains low-molecular-weight PAHs and was a dominant contributor to the measured PAH concentrations. Factors linked to motor vehicle use (diesel and gasoline vehicle emissions and evaporative/uncombusted petroleum) and natural gas combustion were also major contributors. Motor vehicle combustion and oil combustion factors were the predominant contributors to particle-phase PAHs, while natural gas combustion, air-surface exchange, and evaporative/uncombusted petroleum factors made substantial contributions to gas-phase PAH concentrations. In contrast to fine particulate matter (PM2.5), which is dominated by regional transport, spatial variations in PAH concentrations suggest that PAH concentrations in the Hudson River Estuary Airshed are dominated by sources within the New York-New Jersey urban-industrial complex.
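PMF belongs to the family of non-negative matrix factorizations: ambient concentrations V (samples × species) are decomposed into source contributions W and source profiles H with all entries non-negative. A minimal sketch using plain Lee-Seung multiplicative updates; real PMF additionally weights each residual by its measurement uncertainty and uses its own solver, both omitted here.

```python
import random

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=2000, seed=0):
    """Factor V ~ W @ H with non-negative W (contributions) and H (profiles)
    via Lee-Seung multiplicative updates, which keep every entry >= 0.
    """
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H
```

On a small synthetic dataset whose rows are exact non-negative mixtures of two profiles, the factorization yields a close reconstruction with strictly non-negative factors.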

  10. Air Quality in the Puebla-Tlaxcala Airshed in Mexico during April 2009

    NASA Astrophysics Data System (ADS)

    Ruiz Suarez, L. G.; Torres Jardón, R.; Torres Jaramillo, J. A.; Barrera, H.; Castro, T.; Mar Morales, B. E.; García Reynoso, J. A.; Molina, L. T.

    2012-04-01

    East of the Mexico megacity lies the metropolitan area of Puebla-Tlaxcala, which is reproducing the same patterns of urban sprawl seen in the Mexico City Metropolitan Area. It is an area of high industrial density, and its fragmented urban sprawl boosts the use of private cars to the detriment of public transport. Emission inventories reflect this fact; they also show considerable use of biomass energy in households and in small industries and service businesses. In April 2009 we carried out a preliminary field campaign in the basin, deploying three mobile units: one in the north, at a site connecting with the Valley of Mexico basin; one in the south, where the basin may connect with the Cuautla-Cuernavaca airshed; and one at a receptor site for the Puebla metropolitan area. These complemented the available data from the local air quality network within the City of Puebla. Analysis of the 2009 data shows a complex flow pattern induced by the Popocatépetl and Iztaccíhuatl volcanoes to the west and La Malinche volcano to the east. Excess NOx emissions in the urban and industrial core lead to very low ozone levels within it, but high ozone concentrations are observed in the peri-urban and rural areas, exceeding the Mexican air quality standards. In our presentation we will describe and explain these observations and will describe a field campaign to be carried out in March-April 2012 aiming to better document the air quality of the Puebla-Tlaxcala airshed. Hybrid observation-model maps of critical ozone levels show the population exposed to exceedances of the official standards. AOT40 maps also show that crops and forests in the region are exposed to unhealthy ozone levels. These results add to those from the MILAGRO and CARIEM field campaigns on the regional scale of air quality issues in central Mexico. A point is made of the need to update the Mexican air quality standard for ozone.

  11. Compilation and evaluation of a Paso del Norte emission inventory for use in photochemical dispersion modeling

    SciTech Connect

    Haste, T.L.; Kumar, N.; Chinkin, L.R.; Roberts, P.T.; Saeger, M.; Mulligan, S.; Yarbrough, J.

    1999-07-01

    Emission inventories are routinely used for planning purposes and as input to comprehensive photochemical air quality models. Photochemical model performance and the development of an effective control strategy are predicated on the accuracy of an underlying emission inventory. The purpose of this study was to compile an ozone precursor emission inventory for the El Paso/Ciudad Juarez/Southern Dona Ana region; generate a spatially and temporally resolved, speciated emission inventory; and evaluate the accuracy and representativeness of the inventory. Existing point, area, and mobile source emissions data were obtained from local government agencies. Emissions were spatially and temporally allocated to a gridded domain using region-specific demographic and land cover information. The inventory was processed using the US Environmental Protection Agency (EPA) recommended Urban Airshed Model Emissions Preprocessor System 2.0 (UAM-EPS 2.0) which generates emissions files that can be directly used as input to the Urban Airshed Model. An evaluation of the emission inventory was then performed by comparing inventory non-methane hydrocarbon (NMHC)/NO{sub x} and CO/NO{sub x} ratios to ambient ratios using air quality data collected during the 1996 Paso del Norte Ozone Study. Detailed NMHC species comparisons were also made in order to investigate the relative composition of individual hydrocarbon species in the emission inventory and in the ambient data. This initial emission inventory is expected to undergo substantial revisions during the upcoming photochemical modeling phase of the effort to better understand and improve the air quality of the El Paso/Ciudad Juarez/Southern Dona Ana region.

  12. 1993 Commercial marine vessel emissions within the COAST modeling domain

    SciTech Connect

    Neece, J.D.; MacKay, J.D.; Smith, J.H.

    1996-12-31

    The Coastal Oxidant Assessment for Southeast Texas (COAST) is a major study to support ozone State Implementation Plan (SIP) development for the Texas nonattainment areas along the Gulf of Mexico. In part, the COAST study is providing data for an Urban Airshed Model (UAM) project being conducted by the Texas Natural Resource Conservation Commission (TNRCC). The modeling domain contains almost the entire Texas coast, from near Mexico into Louisiana. The domain is home to approximately 100 ocean-going ships, 50 harbor vessels, 600 canal vessels, and numerous fishing vessels. In addition, hundreds of foreign vessels call at Texas ports. Both the large-bore engines used to power ocean-going ships and the smaller diesel engines used to power harbor, canal, and fishing vessels emit very large quantities of nitrogen oxides (NO{sub x}) and significant quantities of volatile organic compounds (VOC) and carbon monoxide (CO). As a result, the accuracy of the marine vessel emissions inventory may affect both the accuracy of UAM predictions and the effectiveness of the control strategy or strategies selected in the ozone SIP process. We updated the method of Booz-Allen, "Commercial Marine Vessel Contributions to Emission Inventories," with 1993 data, extended it to cover the entire COAST domain, and broke out canal vessel emissions separately from harbor vessels. We used waterway traffic counts and vessel census data to estimate emissions from ocean-going ships. We used census data and typical operating schedules to estimate harbor and canal vessel emissions. Then we allocated these emissions to each county by the waterway mileage navigable by each class of vessel. Finally, we allocated the county emissions by class to specific waterways and used tile plots as a visual quality-assurance tool.

  13. Modeling ozone episodes in the Baltimore-Washington region

    NASA Technical Reports Server (NTRS)

    Ryan, William F.

    1994-01-01

    Surface ozone (O3) concentrations in excess of the National Ambient Air Quality Standard (NAAQS) continue to occur in metropolitan areas of the United States despite efforts to control emissions of O3 precursors. Future O3 control strategies will be based on results from modeling efforts that have just begun in many areas. Two initial questions that arise are model sensitivity to domain-specific conditions and the selection of episodes for model evaluation and control strategy development. For the Baltimore-Washington region (B-W), the presence of the Chesapeake Bay introduces a number of issues relevant to model sensitivity. In this paper, the specific question of determining the model volume (mixing height) for the Urban Airshed Model (UAM) is discussed and various alternative methods are compared. For the latter question, several analytic approaches, cluster analysis and Classification and Regression Tree (CART) analysis, are undertaken to determine the meteorological conditions associated with severe O3 events in the B-W domain.
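    CART builds a tree of binary splits on meteorological variables that best separate severe-O3 days from the rest. A one-level sketch (a decision stump minimizing Gini impurity, on synthetic data with a hypothetical temperature rule, not the study's classifier) illustrates the core split search:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a boolean label array."""
    if labels.size == 0:
        return 0.0
    p = labels.mean()
    return 2.0 * p * (1.0 - p)

def best_split(x, y):
    """Best single threshold on x minimizing the weighted Gini impurity of y."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_score = None, np.inf
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (left.size * gini(left) + right.size * gini(right)) / y.size
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Synthetic: exceedance days tend to be hot (hypothetical rule with 5% label noise)
rng = np.random.default_rng(2)
tmax = rng.uniform(20, 40, 300)                   # daily max temperature, deg C
exceed = (tmax > 32) ^ (rng.random(300) < 0.05)   # noisy high-O3 label

t, score = best_split(tmax, exceed)
print(f"split at tmax <= {t:.1f} C, weighted Gini {score:.3f}")
```

A full CART implementation recurses on each side of the split and prunes; episode-classification studies then read off the leaf conditions as meteorological regimes.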

  14. Formaldehyde and its relation to CO, PAN, and SO2 in the Houston-Galveston airshed

    NASA Astrophysics Data System (ADS)

    Rappenglück, B.; Dasgupta, P. K.; Leuchner, M.; Li, Q.; Luke, W.

    2010-03-01

    The Houston-Galveston Airshed (HGA) is one of the major metropolitan areas in the US that is classified as a nonattainment area of federal ozone standards. Formaldehyde (HCHO) is a key species in understanding ozone related air pollution; some of the highest HCHO concentrations in North America have been reported for the HGA. We report on HCHO measurements in the HGA from summer 2006. Among several sites, maximum HCHO mixing ratios were observed in the Houston Ship Channel (HSC), a region with a very high density of industrial/petrochemical operations. HCHO levels at the Moody Tower (MT) site close to downtown were dependent on the wind direction: southerly maritime winds brought in background levels (0.5-1 ppbv) while trajectories originating in the HSC resulted in high HCHO (up to 31.5 ppbv). Based on the best multiparametric linear regression model fit, the HCHO levels at the MT site can be accounted for as follows: 38.5±12.3% from primary vehicular emissions (using CO as an index of vehicular emission), 24.1±17.7% formed photochemically (using peroxyacetic nitric anhydride (PAN) as an index of photochemical activity) and 8.9±11.2% from industrial emissions (using SO2 as an index of industrial emissions). The balance 28.5±12.7% constituted the residual which cannot be easily ascribed to the above categories and/or which is transported into the HGA. The CO related HCHO fraction is dominant during the morning rush hour (06:00-09:00 h, all times are given in CDT); on a carbon basis, HCHO emissions are up to 0.7% of the CO emissions. The SO2 related HCHO fraction is significant between 09:00-12:00 h. After 12:00 h HCHO is largely formed through secondary processes. The HCHO/PAN ratios are dependent on the SO2 levels. The SO2 related HCHO fraction at the downtown site originates in the ship channel. Aside from traffic-related primary HCHO emissions, HCHO of industrial origin serves as an appreciable source for OH in the morning.
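    The apportionment above rests on a multiparametric linear regression of HCHO on the three tracers. A sketch with numpy least squares on synthetic data (coefficients and tracer ranges are illustrative, not the study's values) shows how the fractional contributions are recovered:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
co  = rng.uniform(100, 400, n)   # ppbv, vehicular tracer (synthetic)
pan = rng.uniform(0.1, 2.0, n)   # ppbv, photochemical tracer (synthetic)
so2 = rng.uniform(0.5, 10., n)   # ppbv, industrial tracer (synthetic)

# Synthetic HCHO built from known contributions plus a background/residual term
hcho = 0.007 * co + 1.5 * pan + 0.12 * so2 + 0.8 + rng.normal(0, 0.1, n)

# Least-squares fit: HCHO = a*CO + b*PAN + c*SO2 + d
A = np.column_stack([co, pan, so2, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, hcho, rcond=None)
a, b, c, d = coef
print(f"a={a:.4f} b={b:.3f} c={c:.3f} d={d:.2f}")

# Mean fractional contributions, analogous to the percentages quoted above
parts = np.array([a * co.mean(), b * pan.mean(), c * so2.mean(), d])
frac = parts / hcho.mean()
```

Because the fit includes an intercept, the mean fractional contributions sum to one; the intercept plays the role of the unattributed residual.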

  15. Formaldehyde and its relation to CO, PAN, and SO2 in the Houston-Galveston airshed

    NASA Astrophysics Data System (ADS)

    Rappenglück, B.; Dasgupta, P. K.; Leuchner, M.; Li, Q.; Luke, W.

    2009-11-01

    The Houston-Galveston Airshed (HGA) is one of the major metropolitan areas in the US that is classified as a nonattainment area of federal ozone standards. Formaldehyde (HCHO) is a key species in understanding ozone related air pollution; some of the highest HCHO concentrations in North America have been reported for the HGA. We report on HCHO measurements in the HGA from summer 2006. Among several sites, maximum HCHO mixing ratios were observed in the Houston Ship Channel (HSC), a region with a very high density of industrial/petrochemical operations. HCHO levels at the Moody Tower (MT) site close to downtown were dependent on the wind direction: southerly maritime winds brought in background levels (0.5-1 ppbv) while trajectories originating in the HSC resulted in high HCHO (up to 31.5 ppbv). Based on the best multiparametric linear regression model fit, the HCHO levels at the MT site can be accounted for as follows: 38.5±12.3% from primary vehicular emissions (using CO as an index of vehicular emission), 24.1±17.7% formed photochemically (using peroxyacetic nitric anhydride (PAN) as an index of photochemical activity) and 8.9±11.2% from industrial emissions (using SO2 as an index of industrial emissions). The balance 28.5±12.7% constituted the residual which cannot be easily ascribed to the above categories and/or which is transported into the HGA. The CO related HCHO fraction is dominant during the morning rush hour (06:00-09:00 h, all times are given in CDT); on a carbon basis, HCHO emissions are up to 0.7% of the CO emissions. The SO2 related HCHO fraction is significant between 09:00-12:00 h. After 12:00 h HCHO is largely formed through secondary processes. The HCHO/PAN ratios are dependent on the SO2 levels. The SO2 related HCHO fraction at the downtown site originates in the ship channel. Aside from traffic-related primary HCHO emissions, HCHO of industrial origin serves as an appreciable source for OH in the morning.

  16. Potential impacts of Title I nonattainment on the electric power industry: A Chicago case study (Phase 2)

    SciTech Connect

    Fernau, M.E.; Makofske, W.J.; South, D.W.

    1993-06-01

    This study uses version IV of the Urban Airshed Model (UAM-IV) to examine the potential impacts of Title I (nonattainment) and Title IV (acid rain) of the Clean Air Act Amendments of 1990 (CAAA) on the utility industry. The UAM is run for a grid that covers the Commonwealth Edison Power Pool and encompasses the greater Chicago area and surrounding rural areas. Meteorological conditions are selected from an ozone (O{sub 3}) episode on July 5 and 6, 1988.

  17. Understanding the Evolution of Organic Aerosols in the Mexico City Airshed in 2002, 2003 and 2006 using Positive Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Ulbrich, I. M.; Dzepina, K.; Canagaratna, M.; Zhang, Q.; Decarlo, P.; Salcedo, D.; Aiken, A. C.; Onasch, T. B.; Allan, J.; Russell, L. M.; Grivicke, R.; Lamb, B.; Alexander, M. L.; Worsnop, D. R.; Jimenez, J.

    2008-12-01

    Aerosol mass spectrometric measurements yield spectra of ambient aerosols that are a mix of various primary and secondary sources. Organic aerosol (OA) datasets acquired using Aerodyne aerosol mass spectrometers (Q-AMS, C-ToF-AMS, and HR-ToF-AMS) deployed in 2002, 2003, and 2006 in the Mexico City Metropolitan Area (MCMA) at multiple ground locations and from aircraft flights are analyzed with Positive Matrix Factorization to deconvolve information about important sources and processes for organic aerosols. Several components are identified in each dataset. Most datasets resolve contributions from: reduced (oxidative state) hydrocarbon-like OA (HOA), which correlates well with primary combustion tracers such as CO, NOx, and BC; biomass burning OA (BBOA), which correlates with regional fire counts, potassium, levoglucosan, acetonitrile, and HCN; highly-oxidized OA (OOA-I) which shows more regional behavior; and less oxidized OA (OOA-II) which correlates with semivolatile inorganic species such as ammonium nitrate and gas-phase secondary species such as Ox (NO2 + O3) and glyoxal. These correlations are consistent across most datasets when run separately in PMF. Factor spectra are also compared to reference spectra, and ratios of factor concentrations to relevant tracers (e.g., HOA/CO, OOA/Ox) are presented. Factor spectra, time series, diurnal cycles, and ratios are compared at sampling locations across the MCMA and in different years in order to understand the evolution of OA across the airshed. The effect of running multiple datasets within a single PMF model (e.g., simultaneous measurements made at two locations in Mexico City), and the stability of PMF solutions will be described.

  18. NEW DEVELOPMENTS IN EMISSION PROJECTION METHODOLOGIES

    EPA Science Inventory

    The paper describes the scope of an EPA research program to develop the Economic Growth Analysis System (EGAS) that will be operated to provide economic inputs for EPA's Regional Oxidant Model (ROM) and Urban Airshed Model (UAM). The Clean Air Act requires states to reduce ozone ...

  19. Photochemical smog modeling for assessment of potential impacts of different management strategies on air quality of the Bangkok Metropolitan Region, Thailand.

    PubMed

    Oanh, Nguyen Thi Kim; Zhang, Baoning

    2004-10-01

    A photochemical smog model system, the Variable-Grid Urban Airshed Model/Systems Applications International Mesoscale Model (UAM-V/SAIMM), was used to investigate photochemical pollution in the Bangkok Metropolitan Region (BMR). The model system was first applied to simulate a historical two-day photochemical smog episode (January 13-14, 1997) using the 1997 anthropogenic emission database available at the Pollution Control Department and an estimated biogenic emission inventory. The simulated 1-hr ozone (O3) for the BMR, however, did not meet the U.S. Environmental Protection Agency's suggested performance criteria: the simulated minimum and maximum O3 values in the domain were much higher than the observations. Multiple model runs with different precursor emission reduction scenarios showed that the best model performance, with the simulated 1-hr O3 meeting all the criteria, was obtained when volatile organic compound (VOC) and oxides of nitrogen (NOx) emissions from mobile sources were reduced by 50% and carbon monoxide emissions by 20% from the original database. Various combinations of anthropogenic and biogenic emissions in Bangkok and the surrounding provinces were simulated to assess the contribution of different sources to O3 pollution in the city. O3 formation in Bangkok was found to be more VOC-sensitive than NOx-sensitive. To attain the Thailand ambient air quality standard for 1-hr O3 of 100 ppb, VOC emissions in the BMR should be reduced by 50-60%. Management strategies considered in the scenario study consist of Stage I and Stage II vapor controls, replacement of two-stroke by four-stroke motorcycles, 100% compressed-natural-gas buses, 100% natural-gas-fired power plants, and replacement of methyl tertiary-butyl ether by ethanol as a gasoline additive. PMID:15540584
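    The model performance criteria referred to above are typically expressed as mean normalized bias, mean normalized gross error, and unpaired peak prediction accuracy for 1-hr O3. A sketch of those three statistics (the observation cutoff and sample values are illustrative; consult the EPA guidance for the exact thresholds applied in a given study):

```python
import numpy as np

def performance_stats(obs, pred, cutoff=60.0):
    """Mean normalized bias (MNB) and mean normalized gross error (MNGE),
    in percent, over paired hourly O3 values with observations at or above
    a cutoff (ppb), plus unpaired peak prediction accuracy -- three
    statistics commonly used to judge 1-hr O3 model performance."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mask = obs >= cutoff
    norm = (pred[mask] - obs[mask]) / obs[mask]
    mnb = 100.0 * norm.mean()
    mnge = 100.0 * np.abs(norm).mean()
    peak = 100.0 * (pred.max() - obs.max()) / obs.max()
    return mnb, mnge, peak

# Illustrative paired hourly values (ppb), not the Bangkok data
obs = np.array([55., 70., 95., 120., 110., 80.])
pred = np.array([60., 75., 90., 105., 118., 85.])
mnb, mnge, peak = performance_stats(obs, pred)
print(f"MNB={mnb:+.1f}%  MNGE={mnge:.1f}%  peak accuracy={peak:+.1f}%")
```

A run "meeting all the criteria" keeps each statistic inside the guideline bounds; the cutoff excludes low-concentration hours where normalized errors blow up.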

  20. CONTINUED RESEARCH IN MESOSCALE AIR POLLUTION SIMULATION MODELING. VOLUME 5. REFINEMENTS IN NUMERICAL ANALYSIS, TRANSPORT, CHEMISTRY, AND POLLUTANT REMOVAL

    EPA Science Inventory

    Two numerical integration methods identified as having features that provided significant improvements over the technique originally embedded in the Airshed Model have been evaluated. Of particular concern was the treatment of horizontal transport. In the evaluation of the scheme...

  1. Diurnal and elevational variations in ozone and aerosol concentrations in New Hampshire's Class-I Airsheds

    SciTech Connect

    Hill, L.B.; Allen, G.A.

    1994-12-31

    Ozone and fine mass aerosol concentrations on New Hampshire's Mount Washington, situated adjacent to both the Presidential/Dry River and Great Gulf Wilderness Class-I Airsheds, exhibit distinct diurnal and elevational patterns. These patterns are attributed to regional pollutant transport dynamics, nocturnal atmospheric stratification, mountain meteorological phenomena, and scavenging. A well-defined planetary boundary layer (PBL) forms at about 1 km elevation at night, as demonstrated by nocturnal ozone monitoring along the Mount Washington Auto Road. The PBL provides an effective elevational barrier at night, isolating the valleys from the regionally transported air pollutants present above the mixing layer. During the daytime, the PBL breaks up due to convective processes and anabatic (upslope) winds resulting from solar heating in the valley. This process creates a diurnal mixing cycle with ozone maxima recorded near mid-day in the adjacent valley. In contrast, fine mass concentrations are higher at the valley site, attributed to local source inputs and the lack of strong nocturnal scavenging processes compared with ozone. How aerosol concentrations are related to the PBL and how they are affected by diurnal mixing remains unclear, largely due to current sampling methods. Exposure to ozone is generally greater above the treeline in the two airsheds.

  2. Quantification of Wood Smoke Markers in Fine Atmospheric PM in the New York City Airshed

    NASA Astrophysics Data System (ADS)

    Hawley, H. A.; Mazurek, M. A.; Li, M.

    2007-12-01

    Seasonal emissions from residential wood combustion, natural wildfires, agricultural burning and solid waste combustion are considered to be major sources of fine particles to the NYC metropolitan airshed. Wood smoke produced from the combustion of cellulosic material consists of polar organic compounds which are highly water-soluble. As alternative forms of energy production including biofuels for residential heating are developed and become more widely used, a key science question is how much of the carbonaceous PM2.5 currently is from wood smoke in urbanized areas and to what extent is this influencing atmospheric chemical properties. This project focuses on the quantitation of polar organic compounds extracted from fine particle samples (PM2.5) collected as part of the Speciation of Organics for Apportionment of PM-2.5 in the New York City Area (SOAP). The SOAP network operated from May 2002 to May 2003 at four sites: Queens, NYC (high density urban residential); Elizabeth, NJ (adjacent to the NJ Turnpike); Westport, CT (downwind NYC residential); and a regional background site in Chester, NJ (upwind NYC). A quantitative extraction and gas chromatographic/mass spectrometric (GC/MS) chemical analysis procedure was developed and evaluated. Trimethylsilyl (TMS) derivatives were prepared prior to GC/MS analysis and 5-point calibrations and multiple replicates were evaluated to ensure method precision. Levoglucosan was used as the primary marker for cellulose combustion; however a suite of monosaccharides and disaccharides and dehydroabietic acid, a marker indicative of soft wood combustion, also were quantified. Levoglucosan was found during each season at all four sampling locations with ambient mass concentrations ranging from 2.36 ng/m3 to 189 ng/m3. These values represent an estimated low of 0.73 percent to a high of 69 percent of organic carbon in the fine PM from wood smoke. 
The lowest levoglucosan concentrations were present consistently at the Chester, NJ

  3. Survey of volatile organic compounds associated with automotive emissions in the urban airshed of São Paulo, Brazil

    NASA Astrophysics Data System (ADS)

    Colón, Maribel; Pleil, Joachim D.; Hartlage, Thomas A.; Lucia Guardani, M.; Helena Martins, M.

    The Metropolitan Region of São Paulo (MRSP), Brazil, is one of the largest metropolitan areas in the world (population approximately 17 million) and relies heavily on alcohol-based fuels for automobiles. It is estimated that about 40% of the total volume of fuel is ethanol, with some vehicles using pure ethanol and others a gasoline/ethanol blend. As such, São Paulo is an excellent example of an oxygenates-dominated airshed of mobile sources and is most likely indicative of the future in heavily populated US areas such as Los Angeles, where "oxy-fuels" are becoming an important replacement for conventional pure petroleum-based fuels. In this work, we surveyed the ambient air to identify and quantify the organic compounds associated with the evaporative and exhaust emissions of these fuels and to begin to understand the potential for human exposure. Because this was an initial test without detailed prior knowledge of the area's airshed, we applied two different air sampling methods for various time periods to assess the ambient concentrations of a variety of polar and nonpolar volatile organic compounds (VOCs). For quality assurance (QA), we collected all the samples in duplicate (whole-air samples in Summa canisters and adsorbent-based samples on Perkin-Elmer Air Toxics tubes) at various flow rates to test performance. All samples were collected over identical time frames, typically for 1-, 2-, and 4-h periods per day at six different locations over a period of 1 week. Overall, the São Paulo results demonstrate that mean concentrations of single-ring aromatics are 2-3 times higher, volatile aldehydes 5-10 times higher, and simple alcohols 10-100 times higher than in a recent study performed by EPA in the Los Angeles basin. C4-C11 n-alkanes were only slightly elevated in São Paulo.

  4. Using Sediment Records to Reconstruct Historical Inputs Combustion-Derived Contaminants to Urban Airsheds/Watersheds: A Case Study From the Puget Sound

    NASA Astrophysics Data System (ADS)

    Louchouarn, P. P.; Kuo, L.; Brandenberger, J.; Marcantonio, F.; Wade, T. L.; Crecelius, E.; Gobeil, C.

    2008-12-01

    Urban centers are major sources of combustion-derived particulate matter (e.g. black carbon (BC), polycyclic aromatic hydrocarbons (PAH), anhydrosugars) and volatile organic compounds to the atmosphere. Evidence is mounting that atmospheric emissions from combustion sources remain major contributors to air pollution of urban systems. For example, recent historical reconstructions of depositional fluxes for pyrogenic PAHs close to urban systems have shown an unanticipated reversal in the trends of decreasing emissions initiated during the mid-20th Century. Here we compare a series of historical reconstructions of combustion emissions in urban and rural airsheds over the last century using sedimentary records. A complex suite of combustion proxies (BC, PAHs, anhydrosugars, stable lead concentrations and isotope signatures) assisted in elucidating major changes in the type of atmospheric aerosols originating from specific processes (i.e. biomass burning vs. fossil fuel combustion) or fuel sources (wood vs. coal vs. oil). In all studied locations, coal has remained a major source of combustion-derived aerosols since the early 20th Century. Recently, however, oil and biomass combustion have become substantial additional sources of atmospheric contamination. In the Puget Sound basin, in the Pacific Northwest region of the U.S., rural locations not impacted by direct point sources of contamination have helped assess the influence of catalytic converters on concentrations of oil-derived PAH and lead inputs since the early 1970s. Although atmospheric deposition of lead has continued to drop since the introduction of catalytic converters and the ban on leaded gasoline, PAH inputs have "rebounded" in the last decade. A similar steady and recent rise in PAH accumulations in urban systems has been ascribed to continued urban sprawl and increasing vehicular traffic. In the U.S., automotive emissions, whether from gasoline or diesel combustion, are becoming a major source of

  5. Aryl hydrocarbon receptor-mediated activity of particulate organic matter from the Paso del Norte airshed along the U.S.-Mexico border.

    PubMed Central

    Arrieta, Daniel E; Ontiveros, Cynthia C; Li, Wen-Whai; Garcia, Jose H; Denison, Michael S; McDonald, Jacob D; Burchiel, Scott W; Washburn, Barbara Shayne

    2003-01-01

    In this study, we determined the biologic activity of dichloromethane-extracted particulate matter <10 µm in aerodynamic diameter (PM10) obtained from filters at three sites in the Paso del Norte airshed, which includes El Paso, Texas, USA; Juarez, Chihuahua, Mexico; and Sunland Park, New Mexico, USA. The extracts were rich in polycyclic aromatic hydrocarbons (PAHs) and had significant biologic activity, measured using two in vitro assay systems: ethoxyresorufin-O-deethylase (EROD) induction and the aryl hydrocarbon receptor luciferase reporter system. In most cases, both EROD (5.25 pmol/min/mg protein) and luciferase activities (994 relative light units/mg) were highest in extracts from the Advance site, located in an industrial neighborhood in Juarez. These values represented 58% and 55%, respectively, of the induction associated with 1 µM β-naphthoflavone exposures. In contrast, little activity was observed at the Northeast Clinic site in El Paso, the reference site. In most cases, luciferase and EROD activities from extracts collected at the Tillman Health Center site, situated in downtown El Paso, fell between those observed at the other two sites. Overall, a statistically significant correlation existed between PM10 and both EROD and luciferase activities. Chemical analysis of extracts collected from the Advance site demonstrated that concentrations of most PAHs were higher than those reported for most other metropolitan areas in the United States. Calculations made with these data suggest a cancer risk of 5-12 cases per 100,000 people. This risk estimate, as well as comparisons with the work of other investigators, raises concern regarding the potential for adverse health effects among residents of this airshed. Further work is needed to understand the sources, exposure, and effects of PM10 and particulate organic material in the Paso del Norte airshed. PMID:12896850

  6. Final Report of the Grant: ''Vertical Transport and Mixing in Complex Terrain Airsheds''

    SciTech Connect

    Fernando, Joseph Harindra; Anderson, James; Boyer, Don; Berman, Neil

    2004-12-29

    Stable stratification associated with nocturnal thermal circulation in areas of complex terrain leads to interesting and important phenomena that govern local meteorology and contaminant dispersion. Given that most urban areas are in complex topography, understanding and prediction of such phenomena are of immediate practical importance. This project dealt with theoretical, laboratory, numerical and field experimental studies aimed at understanding stratified flow and turbulence phenomena in urban areas, with particular emphasis on flow, turbulence and contaminant transport and diffusion in such flows. A myriad of new results were obtained and some of these results were used to improve the predictive capabilities of the models.

  7. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes, which degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM provides robustness against synchronization error. Overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1-s speech inputs and 4 subjects, speaker recognition accuracy of 94% is achieved for synchronization errors of less than 100 ms.
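    The conventional energy-based detection mentioned above assigns each speech frame to the node recording the highest short-time energy. A minimal sketch with synthetic signals (frame length, gains, and node names are assumptions for illustration, not the paper's settings):

```python
import numpy as np

def frame_energy(sig, frame=160):
    """Mean-square energy per non-overlapping frame (160 samples = 20 ms at 8 kHz)."""
    n = len(sig) // frame
    return (sig[:n * frame].reshape(n, frame) ** 2).mean(axis=1)

# Two wearable nodes; the active speaker's own node records higher energy
rng = np.random.default_rng(4)
t = np.arange(16000)
speech = np.sin(2 * np.pi * 200 * t / 8000.0) * rng.uniform(0.5, 1.0, t.size)
node_a = speech + 0.01 * rng.normal(size=t.size)         # worn by the speaker
node_b = 0.3 * speech + 0.01 * rng.normal(size=t.size)   # distant listener

e = np.vstack([frame_energy(node_a), frame_energy(node_b)])
detected = e.argmax(axis=0)        # 0 -> node A's wearer is speaking
frac_a = (detected == 0).mean()
```

When the nodes are misaligned in time, the frame-wise energy comparison pairs non-corresponding frames, which is the failure mode the UAM-based channel approach is designed to tolerate.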

  8. Associations between immune function and air pollution among postmenopausal women living in the Puget Sound airshed

    NASA Astrophysics Data System (ADS)

    Williams, Lori A.

    Air pollution is associated with adverse health outcomes, and changes in the immune system may be intermediate steps between exposure and a clinically relevant adverse health outcome. We analyzed the associations between three different types of measures of air pollution exposure and five biomarkers of immune function among 115 overweight and obese postmenopausal women whose immunity was assessed as part of a year-long moderate exercise intervention trial. For air pollution metrics, we assessed: (1) residential proximity to major roads (freeways, major arterials, and truck routes); (2) fine particulate matter (PM2.5) at the monitor nearest the residence, averaged over three time windows (3-day, 30-day, and 60-day); and (3) nitrogen dioxide (NO2) modeled from land use characteristics. Our immune biomarkers included three measures of inflammation---C-reactive protein, serum amyloid A, and interleukin-6---and two measures of cellular immunity---natural killer cell cytotoxicity and T lymphocyte proliferation. We hypothesized that living near a major road, increased exposure to PM2.5, and increased exposure to NO2 would each be independently associated with increased inflammation and decreased immune function. We observed 21% lower average natural killer cell cytotoxicity among women living within 150 meters of a major arterial road compared to other women. For PM2.5, we observed changes in 3 of 4 indicators of lymphocyte proliferation stimulated by anti-CD3---an antibody to the T cell receptor---associated with increases in 3-day averaged PM2.5. For 30-day and 60-day averaged PM2.5 we did not observe any statistically significant associations. We observed an increase in the lymphocyte proliferation index stimulated by the plant protein phytohemagglutinin (PHA) at 1 of 2 PHA concentrations in association with modeled NO2. For the three inflammatory markers, we observed no notable associations with any of our measures of air pollution. 
If confirmed, our

  9. INVESTIGATION OF PHOTOCHEMICAL MODELING OF POINT SOURCE POLLUTANTS WITH EULERIAN GRID AND LAGRANGIAN PLUME APPROACHES

    EPA Science Inventory

    In this paper, results of Eulerian grid and Lagrangian photochemical model simulations of emissions from a major elevated point source are presented. A series of simulations with grid sizes varying from 30 km to 2 km was performed with the Urban Airshed Model, a photochemical grid ...

  10. RECEPTOR MODEL COMPARISONS AND WIND DIRECTION ANALYSES OF VOLATILE ORGANIC COMPOUNDS AND SUBMICROMETER PARTICLES IN AN ARID, BINATIONAL, URBAN AIRSHED

    EPA Science Inventory

    The relationship between continuous measurements of volatile organic compounds sources and particle number was evaluated at a Photochemical Assessment Monitoring Station Network (PAMS) site located near the U.S.-Mexico Border in central El Paso, TX. Sources of volatile organic...

  11. DEVELOPMENT AND TESTING OF THE CBM-IV (CARBON-BOND MECHANISM) FOR URBAN AND REGIONAL MODELING

    EPA Science Inventory

    The development and testing of an updated Carbon Bond Mechanism (CBM) is described. The new mechanism, called CBM-IV, was designed for use in EPA's OZIPM-EKMA model. The mechanism, however, is also suitable for use in large air quality simulation models such as the Urban Airshed ...

  12. Measured and predicted airshed concentrations of methyl bromide in an agricultural valley and applications to exposure assessment

    NASA Astrophysics Data System (ADS)

    Honaganahalli, Puttanna S.; Seiber, James N.

    A field study was conducted in September 1995 to measure the ambient atmospheric concentrations of methyl bromide (MeBr) in the Salinas Valley, California. Air concentrations of MeBr were measured at 11 sites located on the adjacent mountains, valley floor and at the Pacific Ocean coast over a 4-d period. The concentrations ranged up to 8.98 μg m-3. Industrial Source Complex Short Term 3 (ISCST3) and CALPUFF dispersion model simulations were performed with several fumigated fields serving as sources, using two estimates of source strengths from published flux values. CALPUFF was driven by 3D meteorology from CALMET. With the lower of the two estimates, the ISCST3 model underpredicted concentrations for 76% of the data, averaging 66% of measured values, and the CALPUFF model also underpredicted, for 67% of observations, averaging 84% of measured values. With the higher of the two estimates, ISCST3 overpredicted by a factor of two for 67% of the data, and CALPUFF overpredicted by a factor of 1.6 for over 50% of the data. Between model-predicted and measured concentrations, the coefficient of determination, R2, was ≈0.7 for both source strengths with the ISCST3 model. The R2 with the CALPUFF model was 0.55 and 0.82 for the source strengths estimated from the two prior flux studies. The margin of exposure (MOE) for the population of the city of Salinas was calculated from the measured ambient concentrations and compared with the current benchmark used by the US EPA and the California Department of Pesticide Regulation for acceptable human health risk. Based on the models' predicted worst-case exposure concentration, the MOE for acute effects was approximately 10,000. For chronic effects it was approximately 100, indicating a need for attention to exposure to MeBr in areas of intense methyl bromide use.
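    The margin-of-exposure comparison described above is, at its core, the ratio of a toxicological benchmark to an exposure concentration. A minimal sketch, with hypothetical NOEL and exposure values that are placeholders rather than figures from the study:

    ```python
    # Illustrative margin-of-exposure (MOE) calculation of the kind described in
    # the abstract: the ratio of a toxicological benchmark (e.g., a NOEL) to the
    # estimated exposure concentration. Larger MOEs indicate wider safety margins.
    # The numbers below are hypothetical placeholders, not values from the study.

    def margin_of_exposure(noel_ugm3: float, exposure_ugm3: float) -> float:
        """Return the MOE as benchmark concentration divided by exposure."""
        if exposure_ugm3 <= 0:
            raise ValueError("exposure must be positive")
        return noel_ugm3 / exposure_ugm3

    # Hypothetical benchmark and worst-case exposure, for illustration only:
    acute_moe = margin_of_exposure(noel_ugm3=78_000.0, exposure_ugm3=8.0)
    ```

    The resulting MOE is then compared against a regulatory benchmark for acceptable risk; per the abstract, the acute MOE was approximately 10,000 while the chronic MOE was approximately 100.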

  13. Refined estimates of biogenic hydrocarbon emissions for Atlanta. Interim report, January 1992-November 1993

    SciTech Connect

    Pierce, T.E.; Coventry, D.H.; Van Meter, A.R.; Geron, C.D.

    1993-11-01

    Biogenic emissions of volatile organic compounds (VOCs) reportedly play an important role in ozone non-attainment for Atlanta. To better understand this problem, the Southern Oxidant Study participated in an intensive field experiment around Atlanta during the summer of 1992. This paper compares estimates from three different inventories. The first inventory uses the existing Biogenic Emissions Inventory System (BEIS) in the Urban Airshed Model (UAM). UAM-BEIS relies on county-aggregated land use patterns and emission factors dating back to the 1970s. A second inventory incorporates recent (circa 1990) satellite data; information from the U.S. Forest Service (USFS) is used to increase the coverage of trees in urban areas from 20% to 30%. The third inventory uses USFS forest inventory statistics to compute leaf biomass and tree species composition for forest survey plots of about 1 acre, which are extrapolated to forest areas of about 2000 hectares as delineated by aerial photography.

  14. Trends in emissions and concentrations of air pollutants in the lower troposphere in the Baltimore/Washington airshed from 1997 to 2011

    NASA Astrophysics Data System (ADS)

    He, H.; Stehr, J. W.; Hains, J. C.; Krask, D. J.; Doddridge, B. G.; Vinnikov, K. Y.; Canty, T. P.; Hosley, K. M.; Salawitch, R. J.; Worden, H. M.; Dickerson, R. R.

    2013-08-01

    Trends in the composition of the lower atmosphere (0-1500 m altitude) and surface air quality over the Baltimore/Washington area and surrounding states were investigated for the period from 1997 to 2011. We examined emissions of ozone precursors from monitors and inventories as well as ambient ground-level and aircraft measurements to characterize trends in air pollution. The US EPA Continuous Emissions Monitoring System (CEMS) program reported substantial decreases in emission of summertime nitrogen oxides (NOx) from power plants, up to ∼80% in the mid-Atlantic States. These large reductions in emission of NOx are reflected in a sharp decrease of ground-level concentrations of NOx starting around 2003. The decreasing trend of tropospheric column CO observed by aircraft is ∼0.8 Dobson unit (DU) per year, corresponding to ∼35 ppbv yr-1 in the lower troposphere (the surface to 1500 m above ground level). Satellite observations of long-term, near-surface CO show a ∼40% decrease over western Maryland between 2000 and 2011; the same magnitude is indicated by aircraft measurements above these regions upwind of the Baltimore/Washington airshed. With decreasing emissions of ozone precursors, the ground-level ozone in the Baltimore/Washington area shows a 0.6 ppbv yr-1 decrease in the past 15 yr. Since photochemical production of ozone is substantially influenced by ambient temperature, we introduce the climate penalty factor (CPF) into the trend analysis of long-term aircraft measurements. After compensating for inter-annual variations in temperature, historical aircraft measurements indicate that the daily net production of tropospheric ozone over the Baltimore/Washington area decreased from ∼20 ppbv day-1 in the late 1990s to ∼7 ppbv day-1 in the early 2010s during ozone season. A decrease in the long-term column ozone is observed as ∼0.2 DU yr-1 in the lowest 1500 m, corresponding to an improvement of ∼1.3 ppbv yr-1. Our aircraft
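    The climate-penalty-factor adjustment described above can be sketched as a joint regression of ozone on year and temperature anomaly, so that the temperature-driven component (the CPF, in ppbv per K) is separated from the long-term trend. The data below are synthetic and the regression is a simplified stand-in for the study's actual analysis:

    ```python
    # A sketch of the climate-penalty-factor (CPF) idea: jointly regress ozone on
    # year and summer temperature anomaly, separating the temperature-driven
    # component (the CPF, ppbv/K) from the long-term trend (ppbv/yr). Synthetic
    # data; the study's actual regression details are not reproduced here.
    import numpy as np

    years = np.arange(1997, 2012, dtype=float)
    rng = np.random.default_rng(7)
    temp_anom = rng.normal(0.0, 1.0, years.size)               # temperature anomaly, K
    ozone = 80.0 - 0.6 * (years - years[0]) + 3.0 * temp_anom  # ppbv; trend -0.6, CPF 3

    # Least-squares fit: ozone = b0 + b1*(year - 1997) + b2*temp_anom
    X = np.column_stack([np.ones(years.size), years - years[0], temp_anom])
    beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)
    trend_ppbv_per_yr, cpf_ppbv_per_K = beta[1], beta[2]       # recovers -0.6 and 3.0
    ```

    With real measurements the fit residuals are large, but the principle is the same: the year coefficient gives the temperature-compensated trend.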

  15. Cytotoxic responses and potential respiratory health effects of carbon and carbonaceous nanoparticulates in the Paso del Norte airshed environment.

    PubMed

    Soto, K F; Murr, L E; Garza, K M

    2008-03-01

    We have utilized a range of manufactured or commercial nanoparticulate materials, including surrogate carbon nano-PM along with combustion-generated carbonaceous (soot) nano-PM characteristic of environmental nano-PM (both indoor and outdoor), to investigate and compare their cytotoxic response in vitro with an immortalized human epithelial (lung model) cell line (A549). These have included nano-Ag, Al2O3, TiO2, Fe2O3, ZrO2, Si3N4, chrysotile asbestos, BC, 2 types of MWCNT-aggregate PM (MWCNT-R and MWCNT-N), high-volume glass-fiber-collected soots (candle, wood, diesel (truck), and tire), and 3 types of natural gas kitchen burner-generated soots: yellow (fuel-rich) flame, low-flow blue flame, and normal-flow blue flame soot PM. These carbonaceous nano-PM species can be found in either indoor or outdoor environments or microenvironments. Two-day and two-week in-vitro cultures of A549 showed cell death (or decreased cell viability) for all nanoparticulate materials, especially significant for all but the TiO2 and the candle, wood, and diesel PM. The natural gas kitchen burner combustion PM cell death response was characteristic of BC and MWCNT PM. There was no correlation with total PAH content of the soot PM. Cytokine release (IL-6, IL-8) was detected for the Ag, Fe2O3, asbestos, BC and the MWCNT PM. Reactive oxygen species (ROS) production was also detected for Ag, Fe2O3, ZrO2, asbestos, BC, and the MWCNT aggregate PM, as well as the natural gas kitchen burner combustion PM. TEM, FESEM, and optical microscopy examination of these nanomaterials illustrates the wide range in PM morphologies and crystallinities as well as cell morphologies. Taken together, these results illustrate proinflammatory and related respiratory health issues in relation to environmental nanoparticulates. PMID:18441401

  16. Cytotoxic Responses and Potential Respiratory Health Effects of Carbon and Carbonaceous Nanoparticulates in the Paso del Norte Airshed Environment

    PubMed Central

    Soto, K. F.; Murr, L. E.; Garza, K. M.

    2008-01-01

    We have utilized a range of manufactured or commercial nanoparticulate materials, including surrogate carbon nano-PM along with combustion-generated carbonaceous (soot) nano-PM characteristic of environmental nano-PM (both indoor and outdoor), to investigate and compare their cytotoxic response in vitro with an immortalized human epithelial (lung model) cell line (A549). These have included nano-Ag, Al2O3, TiO2, Fe2O3, ZrO2, Si3N4, chrysotile asbestos, BC, 2 types of MWCNT-aggregate PM (MWCNT-R and MWCNT-N), high-volume glass-fiber-collected soots (candle, wood, diesel (truck), and tire), and 3 types of natural gas kitchen burner-generated soots: yellow (fuel-rich) flame, low-flow blue flame, and normal-flow blue flame soot PM. These carbonaceous nano-PM species can be found in either indoor or outdoor environments or microenvironments. Two-day and two-week in-vitro cultures of A549 showed cell death (or decreased cell viability) for all nanoparticulate materials, especially significant for all but the TiO2 and the candle, wood, and diesel PM. The natural gas kitchen burner combustion PM cell death response was characteristic of BC and MWCNT PM. There was no correlation with total PAH content of the soot PM. Cytokine release (IL-6, IL-8) was detected for the Ag, Fe2O3, asbestos, BC and the MWCNT PM. Reactive oxygen species (ROS) production was also detected for Ag, Fe2O3, ZrO2, asbestos, BC, and the MWCNT aggregate PM, as well as the natural gas kitchen burner combustion PM. TEM, FESEM, and optical microscopy examination of these nanomaterials illustrates the wide range in PM morphologies and crystallinities as well as cell morphologies. Taken together, these results illustrate proinflammatory and related respiratory health issues in relation to environmental nanoparticulates. PMID:18441401

  17. PM10 modeling for the Thurston County redesignation request

    SciTech Connect

    Carr, E.L.; O'Connor, K.

    1997-12-31

    Present dispersion modeling procedures for assessing particulate matter for air quality planning purposes have historically relied upon Gaussian-based dispersion models such as RAM and ISC. These tools were useful when particulate matter violations were associated with point source emissions or violations of the annual standard. However, most PM{sub 10} exceedances today result from areawide emissions as well as point sources, so a more realistic treatment of dispersion processes is warranted. High 24-hour average PM{sub 10} concentrations throughout the western US frequently occur during fall and winter under low wind speed, stable conditions. These are the same conditions under which the steady-state assumptions of Gaussian-formulated models are invalid. Because the steady-state Gaussian-based approach cannot simulate the accumulation of material from hour to hour, large discrepancies occur between modeled and observed concentrations. The grid-cell, time-stepping approach of the UAM is more appropriate for episodic particulate modeling. The UAM is well suited because it can numerically simulate the effects of emissions, advection (at wind speeds well below 1.0 ms{sup {minus}1}), carryover, diffusion, and removal processes. As part of the Thurston County PM{sub 10} maintenance demonstration, the UAM was applied for both PM{sub 10} and PM{sub 2.5}. CMB analysis has shown that nearly all of the observed PM{sub 10} is the result of primary emissions; correspondingly, the UAM was applied without particulate chemistry. In this mode the model is computationally inexpensive and easily runs on a PC platform.
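    The grid-cell, time-stepping idea the abstract contrasts with steady-state Gaussian models can be illustrated in one dimension: each step adds fresh emissions to the concentrations carried over from the previous step, then advects and diffuses them. This is a toy sketch with made-up parameter values, not UAM's actual 3-D numerics:

    ```python
    # Toy 1-D grid model: explicit upwind advection plus central diffusion, with
    # hour-to-hour carryover of mass from a steady areawide-like source. Under low
    # wind speed, material accumulates near the source, which steady-state
    # Gaussian formulations cannot represent. All parameter values are made up.
    import numpy as np

    def step(c, u, k, dx, dt, emis):
        """One explicit upwind-advection / central-diffusion step (u >= 0, periodic)."""
        adv = -u * (c - np.roll(c, 1)) / dx                         # upwind advection
        dif = k * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
        return c + dt * (adv + dif) + dt * emis                     # carryover + emissions

    c = np.zeros(50)                 # concentrations on a 50-cell line, ug/m3
    emis = np.zeros(50)
    emis[10] = 5.0                   # steady areawide-like source, ug/m3 per second
    for _ in range(200):             # low wind, stable conditions: mass accumulates
        c = step(c, u=0.5, k=5.0, dx=1000.0, dt=60.0, emis=emis)
    ```

    With periodic boundaries the scheme conserves the emitted mass exactly, which makes the carryover effect easy to verify.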

  18. Modeling Visibility in the EL Paso del Norte Region

    NASA Astrophysics Data System (ADS)

    Fitzgerald, R. M.; Medina, R.; DuBois, D. W.; Novlan, D.

    2013-12-01

    Poor visibility is a subject of growing public concern throughout the U.S., and an active area of research. Its societal impacts on air quality, aviation and traffic are significant. Aerosols play a fundamental role in the attenuation of solar radiation, and also affect visibility. The scattering and extinction coefficients of aerosol particles in the Paso del Norte Region have been calculated using the T-matrix model in conjunction with a laser particle counter. Inter-comparison of the model's results for the scattering and absorption coefficients against the corresponding data from a Photoacoustic Extinctiometer instrument (which measures in-situ absorption and scattering coefficients of aerosol particles) shows excellent agreement. In addition, the volume-weighted method is used to determine the composite index of refraction representative of the aerosols in the Paso del Norte Airshed, to obtain information on the type of aerosol particles present. The Single Scattering Albedo has also been retrieved using our methodology to obtain further insight into the type of aerosols present on a given day. Finally, the Koschmieder equation has been used to calculate the visual range, or visibility, which was correlated with the PM2.5 and PM10 particle concentrations present in the Airshed. Our methodology will allow a better understanding of the size and type of aerosol particles that are most detrimental to visibility in the Paso del Norte Region.
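    The two quantities named above can be sketched directly: a volume-weighted composite refractive index, and the Koschmieder relation V = 3.912 / b_ext. The component volume fractions and refractive indices below are hypothetical, not the study's retrieved values:

    ```python
    # Volume-weighted composite refractive index and Koschmieder visual range.
    # Component fractions/indices are hypothetical illustrations, not the study's
    # retrieved aerosol properties.

    def volume_weighted_index(fractions, indices):
        """Composite refractive index as the volume-fraction-weighted mean."""
        assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
        return sum(f * n for f, n in zip(fractions, indices))

    def koschmieder_visual_range_km(b_ext_per_km):
        """Visual range in km from the extinction coefficient in km^-1."""
        return 3.912 / b_ext_per_km

    # Hypothetical two-component aerosol (dust-like and sulphate-like):
    n_mix = volume_weighted_index([0.6, 0.4], [1.53 + 0.008j, 1.43 + 0.0j])
    visibility = koschmieder_visual_range_km(0.2)   # b_ext = 0.2 km^-1 -> ~19.6 km
    ```

    A complex refractive index can be weighted the same way, with the imaginary part tracking absorption.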

  19. Receptor model source attributions for Utah’s Salt Lake City airshed and the impacts of wintertime secondary ammonium nitrate and ammonium chloride aerosol.

    EPA Science Inventory

    Communities along Utah’s Wasatch Front are currently developing strategies to reduce daily average PM2.5 levels to below National Ambient Air Quality Standards during wintertime, persistent, multi-day stable atmospheric conditions or cold-air pools. Speciated PM2.5 data from the ...

  20. Adaptation of the South-West Wing of Collegium Chemicum of Adam Mickiewicz University in Poznań for Storage Facilities/ Adaptacja Południowo-Zachodniego Skrzydła Budynku Collegium Chemicum Uam W Poznaniu Na Cele Magazynowe

    NASA Astrophysics Data System (ADS)

    Ścigałło, Jacek

    2015-06-01

    The article addresses the adaptation of the Collegium Chemicum building of Adam Mickiewicz University in Poznań to the storage needs of the university's Main Library. The building is situated on Grunwaldzka Street in Poznań. The introduction describes the building and its structural solutions. The results of materials testing and measurements of the existing reinforcement are presented. Based on these measurements and tests, a diagnostic analysis of the structure was performed, which allowed the determination of the permissible loads on the floor slabs. The results of the analysis of the current state proved unsatisfactory, not only with respect to the planned, substantial storage loads: the analysis showed that the structure is already significantly overloaded under its present loading.

  1. Comparative Analysis of Single-Species and Polybacterial Wound Biofilms Using a Quantitative, In Vivo, Rabbit Ear Model

    PubMed Central

    Seth, Akhil K.; Geringer, Matthew R.; Hong, Seok J.; Leung, Kai P.; Galiano, Robert D.; Mustoe, Thomas A.

    2012-01-01

    Introduction The recent literature suggests that chronic wound biofilms often consist of multiple bacterial species. However, without appropriate in vivo, polybacterial biofilm models, our understanding of these complex infections remains limited. We evaluate and compare the effect of single- and mixed-species biofilm infections on host wound healing dynamics using a quantitative, in vivo, rabbit ear model. Methods Six-mm dermal punch wounds in New Zealand rabbit ears were inoculated with Staphylococcus aureus strain UAMS-1, Pseudomonas aeruginosa strain PAO1, or both, totaling 10^6 colony-forming units/wound. Bacterial proliferation and maintenance in vivo were done using procedures from our previously published model. Wounds were harvested for histological measurement of wound healing, viable bacterial counts using selective media, or inflammatory cytokine (IL-1β, TNF-α) expression via quantitative reverse-transcription PCR. Biofilm structure was studied using scanning electron microscopy (SEM). For comparison, biofilm deficient mutant UAMS-929 replaced strain UAMS-1 in some mixed-species infections. Results Bacterial counts verified the presence of both strains UAMS-1 and PAO1 in polybacterial wounds. Over time, strain PAO1 became predominant (p<0.001). SEM showed colocalization of both species within an extracellular matrix at multiple time-points. Compared to each monospecies infection, polybacterial biofilms impaired all wound healing parameters (p<0.01), and increased expression of IL-1β and TNF-α (p<0.05). In contrast, mixed-species infections using biofilm-deficient mutant UAMS-929 instead of wild-type strain UAMS-1 showed less wound impairment (p<0.01) with decreased host cytokine expression (p<0.01), despite a bacterial burden and distribution comparable to that of mixed-wild-type wounds. Conclusions This study reveals that mixed-species biofilms have a greater impact on wound healing dynamics than their monospecies counterparts. The increased

  2. CFD Modeling For Urban Air Quality Studies

    SciTech Connect

    Lee, R L; Lucas, L J; Humphreys, T D; Chan, S T

    2003-10-27

    The computational fluid dynamics (CFD) approach has been increasingly applied to many atmospheric applications, including flow over buildings and complex terrain, and dispersion of hazardous releases. However, there has been much less activity on coupling CFD with atmospheric chemistry. Most atmospheric chemistry applications have focused on modeling chemistry at larger spatial scales, such as the global or urban airshed scale. However, increased attention to terrorism threats has stimulated the need for much more detailed simulations involving chemical releases within urban areas. This motivated us to develop a new CFD/coupled-chemistry capability as part of our modeling effort.

  3. Particulate matter formation in the San Joaquin Valley: Modeling of a winter episode

    SciTech Connect

    Kaduwela, A.P.; Hughes, V.M.; Hackney, R.J.; Jackson, B.J.; Magliano, K.L.; Ranzieri, A.J.

    1998-12-31

    The gaseous and particulate matter concentrations in the San Joaquin Valley simulated using UAM-AERO for the January 4--6, 1996 winter episode are presented and compared with the measurements made during this period. The emphasis here is on the formation of secondary aerosols. The sensitivity of modeled results to input data such as initial/boundary conditions, emissions, and meteorological conditions is also described.

  4. The future "Golden Age" of predictive models for surface water quality and ecosystem management

    SciTech Connect

    Thomann, R.V.

    1998-02-01

    This paper is based on a Simon W. Freese lecture given at the ASCE North American Water and Environment Congress '96, Anaheim, California, on June 24, 1996. The role of water quality modeling in providing input to the decision-making process through understanding, dialogue, and consensus is discussed. The evolution of models is seen in three stages. During the first stage (1925 to about 1980), all sources (point, nonpoint, and sediment) were external to the model, but only point sources were directly linked to the originating input. During the second stage (about 1980 to 1990), sediment models were coupled to the water column, and hydrodynamic and watershed models were linked; a link was then established from the watershed models to the inputs of the water quality model. During the third stage (currently under way), airshed models are being incorporated, with expansion to include other aspects of the aquatic ecosystem. The Chesapeake Bay is used as an illustration. Issues of model credibility and confirmation are discussed; ultimately, the scientific and engineering community decides on the suitability of a modeling framework. The growth in model size over the history of modeling has been significant and parallels the increase in computing power. Future modeling challenges lie ahead in the areas of watershed models; airshed-watershed-estuarine-coastal ocean models; and whole-ecosystem, living-resources models. The success of water quality models will not necessarily be due to bigness and complexity but rather to increases in understanding, which can contribute to building consensus in water quality management decision-making.

  5. Simulation of smoke plumes from agricultural burns: application to the San Luis/Rio Colorado airshed along the U.S./Mexico border.

    PubMed

    Choi, Yu-Jin; Fernando, H J S

    2007-12-15

    Vegetation fires emit a number of air pollutants, impacting air quality at local, regional and global scales. One such pollutant is particulate matter (PM), which is known to trigger adverse health effects. In this study, the CALPUFF/CALMET/MM5 modeling system is employed to simulate PM(10) dispersion (PM with aerodynamic diameter less than 10 microm) from agricultural fires in the Yuma/San Luis area along the U.S./Mexico border, with the aim of investigating the local and regional air quality impacts of fires. To the extent possible, data collected from and observations made in the study area were used to infer inputs to the modeling system, but insufficient information on burning practices and input parameters, such as the duration of fire, PM(10) emission rate and plume rise, necessitated relying on previously published research as well as the Fire Emission Production Simulator (FEPS) model to provide the necessary inputs. Under the simulated conditions the fire plumes did not disperse much, and thus mostly affected the area near the sources. The PM impact of fires on populated (receptor) areas in Yuma/San Luis was less than 15 microg/m(3), calculated on the basis of the EPA-recommended 24-hr averaged PM(10). If the formation of secondary particles is considered, the impacts could have been greater. In order to conduct more realistic fire plume simulations, it is imperative to have accurate fire-activity records, such as the firing technique applied, fuel condition and time of burning, as well as some model updates. In all, this paper presents a methodology for calculating PM introduced by agricultural burns, while identifying critical improvements to be made in future work. PMID:17889257

  6. Testing of a PC-based regional air quality modeling system

    SciTech Connect

    Tran, K.T.; Cuq, F.

    1998-12-31

    Current regional modeling practice requires the use of a mesoscale model such as CSUMM or MM5 to generate the windfields and other meteorological inputs, and a photochemical grid model such as UAM or SAQM-AERO to predict ozone and PM concentrations. These models require extensive resources and frequently run on supercomputers and Unix workstations. Costs for running them on such computers are prohibitive, especially for the analysis of different control strategies, which may require dozens of model simulations. This paper describes the development and adaptation of regional models to run on inexpensive Pentium PCs. Benchmark tests using actual episodes are also compared against results from Cray supercomputers and Unix workstations.

  7. Mesoscale modeling of combined aerosol and photo-oxidant processes in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Lazaridis, M.; Spyridaki, A.; Solberg, S.; Smolík, J.; Zdímal, V.; Eleftheriadis, K.; Aleksanropoulou, V.; Hov, O.; Georgopoulos, P. G.

    2005-03-01

    Particulate matter and photo-oxidant processes in the Eastern Mediterranean have been studied using the UAM-AERO mesoscale air quality model in conjunction with the NILU-CTM regional model. Meteorological data were obtained from the RAMS prognostic meteorological model. The modeling domain includes the eastern Mediterranean area between the Greek mainland and the island of Crete. The modeling system is applied to study the atmospheric processes in three periods, i.e. 13-16 July 2000, 26-30 July 2000 and 7-14 January 2001. The spatial and temporal distributions of both gaseous and particulate matter pollutants have been extensively studied, together with the identification of major emission sources in the area. The modeling results were compared with field data obtained in the same periods. The objective of the current modeling work was mainly to apply the UAM-AERO mesoscale model in the eastern Mediterranean in order to assess the field campaigns performed and determine whether the applied mesoscale model is fit for this purpose. Comparison of the modeling results with measured data was performed for a number of gaseous and aerosol species. The UAM-AERO model underestimates the measured PM10 concentrations during the summer and winter campaigns. Discrepancies between modeled and measured data are attributed to unresolved particulate matter emissions. Particulate matter in the area is mainly composed of sulphate, sea salt and crustal materials, with significant amounts of nitrate, ammonium and organics. During winter the particulate matter and oxidant concentrations were lower than the summer values.

  8. Biogenic emissions modeling for Southeastern Texas

    SciTech Connect

    Estes, M.; Jacob, D.; Jarvie, J.

    1996-12-31

    The Texas Natural Resource Conservation Commission (TNRCC) modeling staff performed biogenic hydrocarbon emissions modeling in support of gridded photochemical modeling for ozone episodes in 1992 and 1993 for the Coastal Oxidant Assessment for Southeast Texas (COAST) modeling domain. This paper summarizes the results of the biogenic emissions modeling and compares preliminary photochemical modeling results to ambient air monitoring data collected during the 1993 COAST study. Biogenic emissions were estimated using BIOME, a gridded biogenic emissions model that uses region-specific land use and biomass density data, and plant species-specific emission factor data. Ambient air monitoring data were obtained by continuous automated gas chromatography at two sites, one-hour canister samples at 5 sites, and 24-hour canister samples at 13 other sites. The concentrations of Carbon Bond-IV species (as determined from urban airshed modeling) were compared to measured hydrocarbon concentrations. In this paper, we examined diurnal and seasonal variations, as well as spatial variations.
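    Gridded biogenic inventories of the BIOME/BEIS type estimate emissions roughly as emission factor times foliar biomass times cell area, scaled by an environmental adjustment. A minimal sketch, assuming a simple exponential temperature correction of the kind used for monoterpene-like emissions; the function name and coefficient are illustrative, not BIOME's actual algorithm:

    ```python
    # Sketch of a biogenic-emission estimate: EF x biomass density x area x an
    # environmental adjustment. The exponential temperature correction referenced
    # to 30 C and its beta coefficient are illustrative assumptions.
    import math

    def biogenic_emission_g_per_h(ef_ug_per_g_h, biomass_g_per_m2, area_m2,
                                  temp_c, beta=0.09):
        """Emission (g/h) with a monoterpene-style exponential T-correction at 30 C."""
        gamma = math.exp(beta * (temp_c - 30.0))      # environmental adjustment factor
        return ef_ug_per_g_h * biomass_g_per_m2 * area_m2 * gamma * 1e-6  # ug -> g
    ```

    In a gridded model this would be summed over land-use classes in each cell and driven by hourly temperature (and light, for isoprene).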

  9. Final Technical Report: Development of the DUSTRAN GIS-Based Complex Terrain Model for Atmospheric Dust Dispersion

    SciTech Connect

    Allwine, K Jerry; Rutz, Frederick C.; Shaw, William J.; Rishel, Jeremy P.; Fritz, Brad G.; Chapman, Elaine G.; Hoopes, Bonnie L.; Seiple, Timothy E.

    2007-05-01

    Activities at U.S. Department of Defense (DoD) training and testing ranges can be sources of dust in local and regional airsheds governed by air-quality regulations. The U.S. Department of Energy’s Pacific Northwest National Laboratory just completed a multi-year project to develop a fully tested and documented atmospheric dispersion modeling system (DUST TRANsport or DUSTRAN) to assist the DoD in addressing particulate air-quality issues at military training and testing ranges.

  10. Mesoscale modeling of combined aerosol and photo-oxidant processes in the eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Lazaridis, M.; Spyridaki, A.; Solberg, S.; Smolík, J.; Ždímal, V.; Eleftheriadis, K.; Aleksandropoulou, V.; Hov, O.; Georgopoulos, P. G.

    2004-09-01

    Particulate matter and photo-oxidant processes in the Eastern Mediterranean have been studied using the UAM-AERO mesoscale air quality model in conjunction with the NILU-CTM regional model. Meteorological data were obtained from the RAMS prognostic meteorological model. The modeling domain includes the eastern Mediterranean area between the Greek mainland and the island of Crete. The modeling system is applied to study the atmospheric processes in three periods, i.e. 13-16 July 2000, 26-30 July 2000 and 7-14 January 2001. The spatial and temporal distributions of both gaseous and particulate matter pollutants have been extensively studied, together with the identification of major emission sources in the area. The modeling results were compared with field data obtained in the same periods. Comparison of the modeling results with measured data was performed for a number of gaseous and aerosol species. The UAM-AERO model underestimates the measured PM10 concentrations during summer, but better agreement has been obtained for the winter data.

  11. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology used to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, owing to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the component models. System parameters are identified using a finite-element technique, and the model is then simulated in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system's performance.
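    The component-aggregation idea can be sketched with a single lumped component: a mass-spring-damper whose drive is a piezoelectric force taken as proportional to voltage, with the component forces summed in one equation of motion. All parameter values below are hypothetical; the paper identifies its parameters by finite-element analysis and simulates in Matlab/SIMULINK:

    ```python
    # Lumped mass-spring-damper sketch of a piezo transducer component model:
    # m*x'' + c*x' + k*x = alpha*V(t), integrated with semi-implicit Euler.
    # All parameter values are hypothetical, chosen only for illustration.
    import numpy as np

    def simulate(m, c, k, alpha, volts, dt):
        """Integrate m*x'' + c*x' + k*x = alpha*V(t); returns displacement history."""
        x, v = 0.0, 0.0
        xs = np.empty(len(volts))
        for i, V in enumerate(volts):
            a = (alpha * V - c * v - k * x) / m   # sum of component forces / mass
            v += a * dt                           # semi-implicit: velocity first
            x += v * dt
            xs[i] = x
        return xs

    t = np.arange(0.0, 1e-3, 1e-7)                          # 1 ms at 10 MHz sampling
    drive = 100.0 * np.sin(2.0 * np.pi * 20_000.0 * t)      # hypothetical 20 kHz drive
    x = simulate(m=0.05, c=20.0, k=2.0e9, alpha=5.0, volts=drive, dt=1e-7)
    ```

    Further components (horn, tool, workpiece contact) would each contribute a force term to the aggregate equation of motion, which is the composition strategy the paper describes.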

  12. Sensitivity of Urban Photochemical Models to Upper Wind Measurements

    NASA Astrophysics Data System (ADS)

    Al-Wali, Khalid Ibrahim

    1995-01-01

    The 1992 Atlanta Field Intensive of the Southern Oxidants Research Program on Ozone Non-Attainment (SORP-ONA) provided a unique data set of urban meteorological measurements. The data were used to investigate the sensitivity of photochemical model results to the spatial and temporal resolution of upper-air meteorological measurements. Root Mean Square Differences (RMSD) and average absolute deviations for winds and model-calculated NOy and O3 values were computed for a variety of measurement strategies that differed in the spatial and temporal resolution of the upper-air data. A Lagrangian particle dispersion model (LPDM) was used to study the movement of the plumes from the Atlanta urban core and an elevated power plant located northwest of Atlanta. Scenarios with different sets of wind measurements were performed. The results show that placement of upper-air measurements at or near the center of emissions density should be the highest priority of field measurement campaigns. A prognostic model, the Regional Atmospheric Modeling System (RAMS), was used to generate the three-dimensional winds to run the UAM. Results from the UAM runs support the same conclusion. A number of Large Eddy Simulations (LES) were performed for urban, forested, and unforested areas with different soil moisture to examine the extent of the thermals that define the height of the mixed layer. Results from the LES runs show that simulations with dry soil produce higher sensible heat flux and stronger thermals with a deeper mixed layer than simulations with moister soil. The LES runs also showed that sensible heat fluxes, vertical motion, and mixing are weaker over forested areas than over urban areas. Three methods were used to calculate the height of the convective mixed layer; differences of up to 200 m were found between the three methods.
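The RMSD and average-absolute-deviation statistics used above to compare measurement strategies are straightforward to compute; a minimal sketch (function names and sample wind values are ours, not from the study):

```python
import numpy as np

def rmsd(modeled, observed):
    """Root Mean Square Difference between paired model and observation values."""
    m = np.asarray(modeled, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((m - o) ** 2)))

def mean_abs_dev(modeled, observed):
    """Average absolute deviation between paired values."""
    m = np.asarray(modeled, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs(m - o)))

# Hypothetical hourly wind speeds (m/s) from one measurement strategy vs. model
obs = [2.1, 3.4, 5.0, 4.2]
mod = [2.5, 3.0, 4.6, 4.8]
print(rmsd(mod, obs), mean_abs_dev(mod, obs))
```

Computing these statistics for each candidate network layout is what allows the strategies to be ranked.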

  13. Comparison of Homogeneous and Heterogeneous CFD Fuel Models for Phase I of the IAEA CRP on HTR Uncertainties Benchmark

    SciTech Connect

    Gerhard Strydom; Su-Jong Yoon

    2014-04-01

    Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem before these uncertainties are propagated in the subsequent coupled neutronics/thermal-fluids phases of the benchmark. In many previous studies of high temperature gas-cooled reactors, a volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact are not modeled directly; instead, an effective thermal conductivity is employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model the uranium oxycarbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled, and the fuel compact is treated as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both the homogeneous and heterogeneous models to compare their thermal characteristics. The nominal values of the input parameters were used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
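The effective thermal conductivity that a homogeneous model substitutes for the particle-resolved compact can be illustrated with a standard mixture rule. A sketch using the Maxwell-Eucken relation for spherical inclusions; the benchmark may specify a different correlation, and the conductivity values below are purely illustrative:

```python
def maxwell_eucken(k_matrix, k_particle, phi):
    """Effective conductivity of spherical particles (volume fraction phi)
    dispersed in a continuous matrix, via the Maxwell-Eucken relation."""
    kc, kd = k_matrix, k_particle
    return kc * (2 * kc + kd - 2 * phi * (kc - kd)) / (2 * kc + kd + phi * (kc - kd))

# Illustrative values (W/m-K): graphite matrix ~30, lumped TRISO particle ~3.5,
# particle packing fraction ~0.35 -- placeholders, not the benchmark inputs.
k_eff = maxwell_eucken(30.0, 3.5, 0.35)
print(k_eff)
```

The relation correctly recovers the matrix conductivity at phi = 0 and the particle conductivity at phi = 1, which makes it a convenient sanity check for homogenized inputs.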

  14. MICROBIAL AEROSOLS: ESTIMATED CONTRIBUTION OF COMBINE HARVESTING TO AN AIRSHED

    EPA Science Inventory

    From plate counts of the airborne microorganisms in the downwind dust plume of operating grass-seed combines, the mean source concentrations were calculated to be 6.4 x 10 to the 8th power and 4.7 x 10 to the 8th power/cu m, respectively, potentially accounting for at least 41.9%...

  15. The BOND project: Biogenic aerosols and air quality in Athens and Marseille greater areas

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, R. E. P.; Tagaris, E.; Pilinis, C.; Andronopoulos, S.; Sfetsos, A.; Bartzis, J. G.

    2004-03-01

    The role of secondary biogenic organic aerosol in the aerosol budget is examined using the Atmospheric Dispersion of Pollutants over Complex Terrain-Urban Airshed Model-Aerosols (ADREA-I/UAM-AERO) modeling system in two representative Mediterranean areas. The areas were selected because of their elevated biogenic emission levels and the sufficient degree of meteorological and land-use diversity characterizing the locations. Comparison of the model results with and without biogenic emissions reveals the significant role biogenic emissions play in modulating ozone and aerosol concentrations. Biogenic emissions are predicted to affect the concentrations of organic aerosol constituents through the reactions of terpenes with O3, OH and NO3. The ozonolysis of terpenes is predicted to increase OH radical concentrations by 10% to 78% for Athens and by 20% to 95% for Marseille, depending on the location, compared to the predictions without biogenic emissions. The reactions of this extra hydroxyl radical with SO2 and NOx yield increased concentrations of sulfates and nitrates in the particulate phase. As a result, biogenic emissions are predicted to affect the concentrations not only of organic aerosols but of inorganic aerosols as well. Thus, biogenic emissions should be taken into account when models are applied to predict atmospheric pollution and to design and enforce abatement strategies.

  16. Development of aroCACM/MPMPO 1.0: a model to simulate secondary organic aerosol from aromatic precursors in regional models

    NASA Astrophysics Data System (ADS)

    Dawson, Matthew L.; Xu, Jialu; Griffin, Robert J.; Dabdub, Donald

    2016-06-01

    The atmospheric oxidation of aromatic compounds is an important source of secondary organic aerosol (SOA) in urban areas. The oxidation of aromatics depends strongly on the levels of nitrogen oxides (NOx). However, details of the mechanisms by which oxidation occurs have only recently been elucidated. Xu et al. (2015) developed an updated version of the gas-phase Caltech Atmospheric Chemistry Mechanism (CACM) designed to simulate toluene and m-xylene oxidation in chamber experiments over a range of NOx conditions. The output from such a mechanism can be used in thermodynamic predictions of gas-particle partitioning leading to SOA. The current work reports the development of a model for SOA formation that combines the gas-phase mechanism of Xu et al. (2015) with an updated lumped SOA-partitioning scheme (Model to Predict the Multi-phase Partitioning of Organics, MPMPO) that allows partitioning to multiple aerosol phases and that is designed for use in larger-scale three-dimensional models. The resulting model is termed aroCACM/MPMPO 1.0. The model is integrated into the University of California, Irvine - California Institute of Technology (UCI-CIT) Airshed Model, which simulates the South Coast Air Basin (SoCAB) of California. Simulations using 2012 emissions indicate that "low-NOx" pathways to SOA formation from aromatic oxidation play an important role, even in regions that typically exhibit high-NOx concentrations.
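The gas-particle partitioning step that converts mechanism output into SOA mass is often illustrated with the classic Odum two-product absorptive-partitioning yield; the stand-in parameters below are illustrative, not those of MPMPO or aroCACM:

```python
def soa_yield(M_o, alphas, Ks):
    """Odum two-product absorptive-partitioning yield:
    Y = M_o * sum_i alpha_i * K_i / (1 + K_i * M_o),
    where M_o is the absorbing organic mass (ug/m^3), alpha_i are mass-based
    stoichiometric yields and K_i partitioning coefficients (m^3/ug)."""
    return M_o * sum(a * K / (1.0 + K * M_o) for a, K in zip(alphas, Ks))

# Illustrative two-product parameters for an aromatic precursor (placeholders)
alphas, Ks = (0.3, 0.2), (0.05, 0.002)
print(soa_yield(10.0, alphas, Ks))  # yield at a modest organic loading
```

The yield grows with organic loading M_o and saturates at the sum of the alpha_i, which is why SOA formation from aromatics is so sensitive to the pre-existing aerosol burden.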

  17. Regional photochemical air quality modeling in the Mexico-US border area

    SciTech Connect

    Mendoza, A.; Russell, A.G.; Mejia, G.M.

    1998-12-31

    The Mexico-United States border area has become increasingly important because of its commercial, industrial and urban growth, and environmental concerns have risen as a result. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated environmental impact assessment in the area. Of particular concern is air quality, and how activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how grid resolution affects model performance. The model performed within acceptable statistical limits using 12.5 x 12.5 km{sup 2} grid cells, and the benefits of using finer grids were limited. Results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids led to a significant loss of information.

  18. Inclusion in the Workforce for Students with Intellectual Disabilities: A Case Study of a Spanish Postsecondary Education Program

    ERIC Educational Resources Information Center

    Judge, Sharon; Gasset, Dolores Izuzquiza

    2015-01-01

    The Autonomous University of Madrid (UAM) is the first Spanish university to provide training to young people with intellectual disabilities (ID) in the university environment, which qualifies them for inclusion in the workforce. In this practice brief we describe the UAM-Prodis Patronage Chair program, a successful model used at Spanish…

  19. Assembling a biogenic hydrocarbon emissions inventory for the SCOS97-NARSTO modeling domain

    SciTech Connect

    Benjamin, M.T.; Winer, A.M.; Karlik, J.; Campbell, S.; Jackson, B.; Lashgari, A.

    1998-12-31

    To assist in developing ozone control strategies for Southern California, the California Air Resources Board is developing a biogenic hydrocarbon (BHC) emissions inventory model for the SCOS97-NARSTO domain. The basis for this bottom-up model is SCOS97-NARSTO-specific landuse and landcover maps, leafmass constants, and BHC emission rates. In urban areas, landuse maps developed by the Southern California Association of Governments, San Diego Association of Governments, and other local governments are used while in natural areas, landcover and plant community databases produced by the GAP Analysis Project (GAP) are employed. Plant identities and canopy volumes for species in each landuse and landcover category are based on the most recent botanical field survey data. Where possible, experimentally determined leafmass constant and BHC emission rate measurements reported in the literature are used or, for those species where experimental data are not available, values are assigned based on taxonomic methods. A geographic information system is being used to integrate these databases, as well as the most recent environmental correction algorithms and canopy shading factors, to produce a spatially- and temporally-resolved BHC emission inventory suitable for input into the Urban Airshed Model.
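The environmental correction algorithms mentioned above typically scale a standard-condition emission rate by light and temperature factors. A sketch using the isoprene corrections commonly attributed to Guenther et al. (1993); the constants are as widely cited, but the SCOS97-NARSTO inventory may use updated algorithms, so treat this as illustrative:

```python
import math

# Constants commonly cited from Guenther et al. (1993) -- illustrative only
ALPHA, C_L1 = 0.0027, 1.066
C_T1, C_T2 = 95000.0, 230000.0   # J/mol
T_S, T_M, R = 303.0, 314.0, 8.314  # standard temp, max temp (K), gas constant

def light_correction(L):
    """PAR correction C_L for isoprene-type emitters (L in umol/m^2/s)."""
    return ALPHA * C_L1 * L / math.sqrt(1.0 + ALPHA ** 2 * L ** 2)

def temp_correction(T):
    """Temperature correction C_T (T in kelvin)."""
    num = math.exp(C_T1 * (T - T_S) / (R * T_S * T))
    den = 1.0 + math.exp(C_T2 * (T - T_M) / (R * T_S * T))
    return num / den

def corrected_emission(base_rate, L, T):
    """Emission at ambient light and temperature from a standard-condition rate."""
    return base_rate * light_correction(L) * temp_correction(T)

print(corrected_emission(1.0, 1000.0, 298.0))
```

In a gridded inventory these factors are evaluated cell by cell from modeled radiation and temperature, then multiplied by the leafmass-weighted base emission rates.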

  20. Development of particulate matter transfer coefficients using a three-dimensional air quality model

    SciTech Connect

    Seigneur, C.; Tonne, C.; Vijayaraghavan, K.; Pai, P.; Levin, L.

    1999-07-01

    Air quality model simulations constitute an effective approach to develop source-receptor relationships (so-called transfer coefficients in the risk analysis framework) because a significant fraction of particulate matter (particularly PM{sub 2.5}) is secondary and, therefore, depends on the atmospheric chemistry of the airshed. These source-receptor relationships can be made specific to source regions and major pollutants. In this study, the authors have used a comprehensive three-dimensional air quality model for PM (SAQM-AERO) to generate episodic transfer coefficients for several source regions in the Los Angeles basin (i.e., surface coastal region, elevated coastal region, central basin, and downwind region). Transfer coefficients were developed by conducting PM air quality simulations with reduced emissions of one of the four precursors (i.e., primary PM, SO{sub 2}, NO{sub x}, and VOC) from each source region. The authors have also compared the transfer coefficients generated from explicit modeling with those based on expert judgment, which were obtained by integrating information from the development of the baseline simulation and across-the-board emission reduction simulations.
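A transfer coefficient derived from a baseline/reduced-emission run pair reduces to a finite difference; a minimal sketch with hypothetical numbers (the actual study values are not reproduced here):

```python
def transfer_coefficient(c_base, c_reduced, e_base, e_reduced):
    """Episodic source-receptor transfer coefficient: change in receptor PM
    concentration per unit change in a precursor's emission rate, estimated
    from a baseline simulation and one reduced-emission simulation."""
    return (c_base - c_reduced) / (e_base - e_reduced)

# Hypothetical example: a 30% NOx cut (100 -> 70 t/day) in one source region
# lowers episodic PM2.5 at the receptor from 40 to 34 ug/m^3.
tc = transfer_coefficient(40.0, 34.0, 100.0, 70.0)
print(tc)  # ug/m^3 per t/day
```

Because secondary PM chemistry is nonlinear, such coefficients are strictly valid only near the perturbation used to derive them, which is why the authors compare them against expert-judgment values.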

  1. Modeling the convective transport of pollutants from eastern Colorado, USA into Rocky Mountain National Park

    NASA Astrophysics Data System (ADS)

    Pina, A.; Schumacher, R. S.; Denning, S.

    2015-12-01

    Rocky Mountain National Park (RMNP) is a Class I airshed designated under the Clean Air Act. Atmospheric nitrogen (N) deposition in the Park has been a known problem since the National Atmospheric Deposition Program (NADP) began weekly measurements of wet deposition of inorganic N in the 1980s. The addition of N from urban and agricultural emissions along the Colorado Front Range to montane ecosystems degrades air quality/visibility, water quality, and soil pH levels. Based on NADP data for summers 1994-2014, wet N deposition at Beaver Meadows in RMNP exhibited a bimodal gamma distribution. In this study, we identified meteorological transport mechanisms for three high wet-N deposition events (all within the secondary peak of the gamma distribution) using the North American Regional Reanalysis (NARR) and the Weather Research and Forecasting (WRF) model. The NARR was used to identify synoptic-scale influences on the transport; the WRF model was used to analyze the convective transport of pollutants from a concentrated animal feeding operation near Greeley, Colorado, USA. The WRF simulation included a passive tracer from the feeding operation and a convection-permitting horizontal grid spacing of 4/3 km. The three cases suggest (a) synoptic-scale moisture and flow patterns are important for priming summer transport events and (b) convection plays a vital role in the transport of Front Range pollutants into RMNP.

  2. A comparative analysis of the model calculated and GPS-observed TEC variations before the Haiti, 2010 and Japan, 2011 earthquakes

    NASA Astrophysics Data System (ADS)

    Namgaladze, Alexander; Karpov, Mikhail; Zolotov, Oleg

    2013-04-01

    Model simulations of the ionosphere Total Electron Content (TEC) variations have been performed for the Haiti January 12, 2010 and Japan March 11, 2011 earthquakes. Calculations have been carried out using the global numerical Upper Atmosphere Model (UAM). The seismogenic impacts in the model have been set as lower boundary conditions for the electric potential equation; namely, vertical electric currents of ~20 nA/m2 flowing from the ionosphere to the Earth have been set over a near-epicenter area of ~250 by 2000 km. The simulated relative (%) TEC disturbances for both events have been compared to each other and to the corresponding GPS-observed data. The common features present in both the observed and modeled TEC variations are: (1) the appearance of positive disturbances of 20-40% in magnitude at night hours for 2-4 days before the earthquake, (2) the geomagnetic conjugation of the effects and (3) the lack of migration (movement) of the TEC deviations during their lifetime (~8 hours). The main differences between the considered events (Haiti and Japan), both modeled and observed, are most evident in the location of the maximum TEC disturbances relative to the geomagnetic equator. In the case of the Haiti earthquake the strongest TEC disturbances are located near the region magnetically conjugate to the earthquake's epicenter in the Southern hemisphere, while in the case of the Japan earthquake they lie near the epicenter in the Northern hemisphere. We attribute this difference to the different seasons in which the events took place. The asymmetry of the Haiti model TEC disturbances relative to the magnetic meridian of the earthquake's epicenter agrees with the GPS-observed one. In the case of the Japan earthquake the asymmetry of the TEC deviations relative to the magnetic meridian of the epicenter is negligible in the observations, while in the model results it is similar to the Haiti case. In order to remove this asymmetry…
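The relative (%) TEC disturbances compared above are percentage deviations from a quiet-time background; a minimal sketch (the TEC values are hypothetical):

```python
def relative_tec_disturbance(tec, tec_quiet):
    """Relative TEC deviation (%) of an observed or modeled value
    from a quiet-time background value (both in TEC units, TECU)."""
    return 100.0 * (tec - tec_quiet) / tec_quiet

# Hypothetical night-time values: 42 TECU observed vs. a 30 TECU quiet background
print(relative_tec_disturbance(42.0, 30.0))  # percent enhancement
```

The 20-40% pre-earthquake enhancements quoted in the abstract are values of exactly this quantity.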

  3. Modeling total reduced sulfur and sulfur dioxide emissions from a kraft recovery boiler using an artificial neural network, and, Investigating volatile organic compounds in an urban intermountain valley using a TD/GC/MS methodology and intrinsic tracer molecules

    NASA Astrophysics Data System (ADS)

    Wrobel, Christopher Louis

    2000-11-01

    Back-propagation neural networks were trained to predict total reduced sulfur (TRS) and SO2 emissions from kraft recovery boiler operational data. A correlation coefficient of 0.721 was achieved between actual and predicted sulfur emissions on test data withheld from network training. The artificial neural network (ANN) models found an inverse, linear relationship between TRS/SO2 emissions and percent opacity. A number of relationships among operating parameters and sulfur emissions were identified by the ANN models and used to formulate strategies for reducing sulfur emissions. Disagreement between ANN model predictions on a subsequent data set revealed an additional scenario for sulfur release not present in the training data. ANN modeling was demonstrated to be an effective tool for analyzing process variables when balancing productivity and environmental concerns. Five receptor sites distributed in the Missoula Valley, Montana, were employed to investigate possible VOC (benzene, 2,3,4-trimethylpentane, toluene, ethylbenzene, m-/p-xylene, o-xylene, naphthalene, acetone, chloroform, α-pinene, β-pinene, p-cymene and limonene) sources. The most dominant source of VOCs was found to be vehicle emissions. Furthermore, anthropogenic sources of terpenoids overwhelmed biogenic emissions on a local scale. Difficulties correlating wind direction and pollutant levels could be explained by wind direction variability, low wind speed and seasonally dependent meteorological factors. Significant evidence was compiled to support the use of p-cymene as a tracer molecule for pulp mill VOC emissions. Apportionment techniques using o-xylene and p-cymene as tracers for automobile and pulp mill emissions, respectively, were employed to estimate each source's VOC contribution. Motor vehicles were estimated to contribute between 56 and 100 percent of the aromatic pollutants in the Missoula Valley airshed, depending upon the sampling location. Pulp mill emissions…
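The tracer-based apportionment described above amounts to scaling each ambient tracer concentration by the corresponding source-profile ratio; the ratios and concentrations below are hypothetical placeholders, not the study's values:

```python
def apportion(ambient_tracer, source_ratio):
    """Estimate one source's contribution to total aromatics from a tracer:
    ambient tracer concentration times that source's aromatics-to-tracer ratio."""
    return ambient_tracer * source_ratio

# Hypothetical ambient levels (ug/m^3) and aromatics-per-tracer profile ratios
veh = apportion(ambient_tracer=1.2, source_ratio=8.0)   # o-xylene -> vehicles
mill = apportion(ambient_tracer=0.3, source_ratio=4.0)  # p-cymene -> pulp mill
total = veh + mill
print(round(100.0 * veh / total, 1), "% vehicular")
```

The method hinges on each tracer being unique to its source, which is why the study first had to establish p-cymene as a reliable pulp-mill marker.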

  4. Remote sensing and hydrological modeling of burn scars

    NASA Astrophysics Data System (ADS)

    Miller, Mary Ellen

    This study examined the potential usefulness of combining remote sensing data with hydrologic models and mapping tools available from Geographic Information Systems (GIS) to evaluate the effects of wildfire. Four subprojects addressed this issue: (1) validation of burn scar maps derived from the Advanced Very High Resolution Radiometer (AVHRR) against the National Fire Occurrence Database; (2) testing the potential of thermal MODIS (Moderate Resolution Imaging Spectroradiometer) data for near-real-time burn scar and fire severity mapping; (3) evaluation of Landsat-derived burn severity maps within WEPP through the Geo-spatial interface for the Water Erosion Prediction Project (GeoWEPP); and (4) predicting potential post-fire erosion for western U.S. forests utilizing existing datasets and models. Wildfire poses complex management problems in all of its stages. Today's land managers have the option of trying to mitigate the effects of a severe fire before it occurs through fuel-management practices. This process is expensive, especially considering the uncertainty of when and where the next fire in a given region will occur. When a wildfire does occur, deciding when to let it burn and when to suppress it may lead to controversial decisions. In addition to the threat to life and property from the fire itself, smoke emissions from large fires can cause air quality problems in distant airsheds. Even after the fire is extinguished, erosion and water quality problems may pose difficult management questions. Contributions stemming from these studies include improved burn scar maps for studying historical fire extent and demonstration of the feasibility of using thermal satellite data to predict burn scar extent when clouds and smoke obscure visible bands. The incorporation of Landsat-derived burn severity maps was shown to improve post-fire erosion modeling results. Finally, the potential post-fire burn severity and erosion risk maps generated for western US forests…

  5. Numerical modeling of solar wind influences on the dynamics of the high-latitude upper atmosphere

    NASA Astrophysics Data System (ADS)

    Förster, M.; Prokhorov, B. E.; Namgaladze, A. A.; Holschneider, M.

    2012-09-01

    Neutral thermospheric wind patterns at high latitudes obtained from cross-track acceleration measurements of the CHAMP satellite above both polar regions are used to deduce statistical neutral wind vorticity distributions and were analyzed in their dependence on the Interplanetary Magnetic Field (IMF). The average pattern confirms the large duskside anticyclonic vortex seen in the average wind pattern and reveals a positive (cyclonic) vorticity on the dawnside, which is almost equal in magnitude to the duskside negative one. The IMF dependence of the vorticity pattern resembles the characteristic field-aligned current (FAC) and ionospheric plasma drift pattern known from various statistical studies obtained under the same sorting conditions as, e.g., the EDI Cluster statistical drift pattern. There is evidence for hemispheric differences in the average magnitudes of the statistical patterns both for plasma drift and even more for the neutral wind vorticity. The paper aims at a better understanding of the globally interconnected complex plasma physical and electrodynamic processes of Earth's upper atmosphere by means of first-principle numerical modeling using the Upper Atmosphere Model (UAM). The simulations of, e.g., thermospheric neutral wind and mass density at high latitudes are compared with CHAMP observations for varying IMF conditions. They show an immediate response of the upper atmosphere and its high sensitivity to IMF changes in strength and orientation.

  6. Aluminum-matrix composites with embedded Ni-Ti wires by ultrasonic consolidation

    NASA Astrophysics Data System (ADS)

    Hahnlen, Ryan; Dapino, Marcelo J.; Short, Matt; Graff, Karl

    2009-03-01

    [Smart Vehicle Workshop] This paper presents the development of active aluminum-matrix composites manufactured by Ultrasonic Additive Manufacturing (UAM), an emerging rapid prototyping process based on ultrasonic metal welding. Composites created through UAM experience process temperatures as low as 20°C, in contrast to current metal-matrix fabrication processes which require fusion of materials and hence reach temperatures of 500°C and above. UAM thus creates unprecedented opportunities to develop adaptive structures with seamlessly embedded smart materials and electronic components without degrading the properties that make embedding these materials and components attractive. This research focuses on three aspects of developing UAM Ni-Ti/Al composites which have not been accomplished before: (i) characterization of the mechanical properties of the composite matrix; (ii) investigation of Ni-Ti/Al composites as tunable-stiffness materials and as strain sensors based on the shape memory effect; and (iii) development of constitutive models for UAM Ni-Ti/Al composites. The mechanical characterization shows an increase in tensile strength of aluminum UAM builds over the parent material (Al 3003-H18), likely due to grain refinement caused by the UAM process. We demonstrate the ability to embed Ni-Ti wires up to 203 μm in diameter in an aluminum matrix, compared with only 100 μm in previous studies. The resulting Ni-Ti/Al UAM composites have cross-sectional area ratios of up to 13.4% Ni-Ti. These composites exhibit a change in stiffness of 6% and a resistivity change of -3% when the Ni-Ti wires undergo the martensite-to-austenite transformation. The Ni-Ti area ratios and associated strength of the shape memory effect are expected to increase as the UAM process becomes better understood and is perfected. The Brinson constitutive model for shape memory transformations is used to describe the stiffness and the strain sensing of Ni-Ti/Al composites in response to…
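The direction of the stiffness-tuning effect can be illustrated with a simple Voigt rule of mixtures for the axial modulus; the moduli below are handbook-range illustrative values (not the paper's measurements), so the computed change differs from the 6% measured on the actual builds:

```python
def composite_modulus(e_matrix, e_fiber, fiber_fraction):
    """Voigt (rule-of-mixtures) axial stiffness of a fiber-reinforced layer:
    volume-weighted average of matrix and fiber moduli."""
    return e_matrix * (1.0 - fiber_fraction) + e_fiber * fiber_fraction

# Illustrative moduli (GPa): Al 3003 ~69; NiTi ~28 (martensite) to ~75 (austenite)
phi = 0.134  # the paper's maximum Ni-Ti cross-sectional area ratio
e_mart = composite_modulus(69.0, 28.0, phi)
e_aust = composite_modulus(69.0, 75.0, phi)
print(round(100.0 * (e_aust - e_mart) / e_mart, 1), "% stiffness increase")
```

Heating the wires through the martensite-to-austenite transformation raises the fiber modulus, so the composite stiffens; the Voigt bound overstates the real effect because it assumes perfect load transfer.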

  7. Validation of WRF/Chem model and sensitivity of chemical mechanisms to ozone simulation over megacity Delhi

    NASA Astrophysics Data System (ADS)

    Gupta, Medhavi; Mohan, Manju

    2015-12-01

    Regional chemical transport models (CTMs) are used extensively for modeling ozone concentrations. WRF/Chem is one such CTM that includes various chemical mechanisms and is used to simulate ozone and other pollutant concentrations at a desired time step. This study focuses on the robustness of WRF/Chem-simulated ozone concentrations over the subtropical urban airshed of megacity Delhi. A detailed analysis is presented of the veracity of two chemical mechanisms, the Carbon Bond Mechanism (CBMZ) and the Regional Atmospheric Chemistry Mechanism (RACM). Simulated ozone concentrations were better predicted with the CBMZ mechanism and are highly sensitive to the rate constants. The ozone concentrations are analyzed against precursors such as oxides of nitrogen (NOx) and carbon monoxide (CO), as well as temperature, considering their strong influence on ozone formation. A consistent positive correlation between ozone concentration and temperature is noted, whereas NOx and CO show an inverse relationship with ozone. Further, model performance is scrutinized by ozone concentration range: it is poor at low ozone levels, highly satisfactory at moderate levels, and satisfactory at higher levels. Despite the limitations observed for low ozone levels, it is concluded that WRF/Chem can effectively be applied to understand ozone trends, tropospheric chemistry and air quality for regulatory purposes at moderate ozone concentration levels. It is further recommended that the model be applied to policy decisions cautiously, with due consideration of the magnitude of ozone levels in the study domain and the performance measures in the specific concentration range.
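The ozone-temperature and ozone-precursor relationships reported above rest on correlation statistics; a minimal Pearson-correlation sketch (the series below are made-up illustrations):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical hourly series: ozone rises with temperature, falls with NOx
ozone = [30.0, 45.0, 60.0, 55.0]
temp = [295.0, 300.0, 305.0, 303.0]
nox = [40.0, 28.0, 18.0, 22.0]
print(pearson(ozone, temp), pearson(ozone, nox))
```

A positive coefficient against temperature and a negative one against NOx and CO is exactly the pattern the Delhi study reports.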

  8. A multivariate/chemical mass balance model for air pollution in China: A hybrid methodology

    SciTech Connect

    Zelenka, M.P.

    1992-01-01

    This research explores the possibility of using a two step method of identifying and quantifying air pollution emissions in an urban environment. The procedure uses a mathematical model called Target Transformation Factor Analysis (TTFA) to estimate source profiles using ambient trace element air concentration data. A source profile is analogous to a fingerprint since it is unique to each source of air pollution. It is important to use source profiles that are measured or estimated for the specific location under study. The profiles estimated by TTFA are then employed in a Chemical Mass Balance (CMB) source apportionment analysis for the airshed. Other known sources are estimated using source signatures from the literature. Applying the TTFA and CMB models in this fashion is called receptor modeling. Generically, a receptor model is the combination of measured air pollution concentration data with a numerical technique which apportions the measured air pollution among distinct source types. The results show that TTFA can be used to provide quantitative estimates of air pollution source profiles for an urban center in China. The number of profiles for unique source types was limited for this data set since emissions from certain types of sources co-varied during each sampling day. Consequently, the CMB analyses that applied the TTFA source profiles needed to be supplemented with standard US EPA source profiles. The application of TTFA for estimating source profiles from ambient data and the subsequent use of those profiles in CMB analyses with source profiles obtained from the EPA's source library can improve the statistical quality of the source apportionment analysis. TTFA can identify source categories of airborne pollution for specific cities, as well as give quantitative data on the composition of the emissions from those source types.
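At its core, a CMB apportionment solves a linear system: measured ambient concentrations equal source profiles times source contributions. A least-squares sketch with a hypothetical two-source, three-element profile matrix (real applications add measurement-uncertainty weighting):

```python
import numpy as np

# Columns: hypothetical source profiles (mass fraction of each trace element
# per unit source mass); rows: measured elements. Placeholder values only.
F = np.array([[0.30, 0.02],
              [0.05, 0.25],
              [0.10, 0.10]])
true_s = np.array([5.0, 2.0])       # assumed source contributions (ug/m^3)
C = F @ true_s                      # resulting ambient elemental concentrations

# Ordinary least-squares CMB solve for the contributions
s_hat, *_ = np.linalg.lstsq(F, C, rcond=None)
print(s_hat)
```

TTFA's role in the hybrid method is to supply locally representative columns of F from the ambient data itself, supplemented here and there by library profiles such as EPA's.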

  9. Inverse Modeling for the anthropogenic emission estimation by 4D-var

    NASA Astrophysics Data System (ADS)

    Paschalidi, Zoi; Elbern, Hendrik; Friese, Elmar; Kasradze, Ketevan

    2014-05-01

    A key to better understanding complex atmospheric processes is the quantitative determination of emission patterns. Emissions of urban areas influence regions far larger than the city area itself; these influences can be local (air pollution, heat islands), regional (air pollution, precipitation) or potentially global (long-range transport and vertical transport into the UTLS levels). Our study therefore deals with estimating anthropogenic emission strengths and the chemical evolution of urban airsheds using inverse modeling techniques. The chemical four-dimensional data assimilation system applied within the European Air Pollution and Dispersion chemistry-transport model and its inverse modeling module (EURAD-IM) has already been generalized to include not only chemical state estimates but also emission rate optimization. This achievement follows the objectives of data assimilation and provides an accurate and consistent image of the atmosphere. In the present work this capability is extended for a better estimation of the emission factors. A novel approach has been developed that gives the best estimate of the multivariate covariance matrices of the optimization problem, by updating the emission factor error covariance matrix to 24 emitted species and by adapting the adjoint code to the new online emission model by TNO. Joint optimization of the initial values of the chemical constituents and the emission rates is performed for two episodes in the North Rhine-Westphalia (NRW) region of Germany: an ozone episode in July 2010 and an aerosol episode during the winter of 2012. The nesting goes from 15 km resolution for the European domain down to 5 km for Central Europe, and is scheduled to end with 1 km resolution for NRW, implying an indispensable knowledge of the emission patterns. For the assimilation, different kinds of observational…
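The joint state/emission optimization is variational; its essence can be shown on a scalar toy problem in which the 4D-Var cost function has a closed-form minimizer (everything here is a didactic reduction, not the EURAD-IM formulation):

```python
# Toy variational estimate of a scalar emission factor e.
# Linear model: concentration c = e * g, with g the unit-emission response.
# Background estimate e_b with variance B; one observation y with variance R.
# Cost: J(e) = (e - e_b)^2 / B + (e*g - y)^2 / R
# Setting dJ/de = 0 gives the closed-form minimizer below.
def optimal_emission(e_b, B, y, R, g):
    """Minimizer of the scalar 4D-Var-style cost function J(e)."""
    return (e_b / B + g * y / R) / (1.0 / B + g * g / R)

# Hypothetical numbers: a precise observation pulls e away from the background
print(optimal_emission(e_b=1.0, B=0.25, y=2.4, R=0.04, g=2.0))
```

The full system minimizes the same kind of background-plus-observation cost, but over gridded fields and 24 emitted species, using the adjoint model to supply the gradient.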

  10. An investigation into using the CALMET/CALPUFF modeling system for assessing atmospheric nitrogen deposition in the Chesapeake Bay

    SciTech Connect

    Sherwell, J.; Garrison, M.

    1997-12-31

    The Maryland Department of Natural Resources Power Plant Research Program (PPRP) has a long-standing interest in the water quality of the Chesapeake Bay. A plan has been developed for the ten tributary regions in Maryland that feed into the Chesapeake Bay. Possible reductions in NO{sub x} deposition rates achievable from reductions in airborne NO{sub x} due to Clean Air Act mandates for power plants are of interest in helping to meet the nutrient reduction targets. The Regional Acid Deposition Model (RADM) has been used to estimate NO{sub x} deposition quantities and the extent of the airshed for the Chesapeake Bay. The CALMET/CALPUFF modeling system, recently made available to the public via EPA's Technology Transfer Network (TTN), is a meteorological and concentration/deposition modeling system that offers a great deal of flexibility for modeling airborne NO{sub x} deposition and for possibly complementing the RADM analyses. A study by PPRP is underway to explore different ways in which the CALMET/CALPUFF modeling system can provide insights into magnitudes, sources, and possible reductions of NO{sub x} deposition to the Bay. The Penn State/NCAR Mesoscale Model (MM4) gridded data set for 1990 has been used for meteorological inputs, and EPA's 1990 National Emissions Inventory for NO{sub x} has been used to derive source inputs. The CALPUFF analysis is being conducted to provide information in three primary areas: first, detailed deposition estimates for the northern part of the Chesapeake Bay around Baltimore; second, source or source group-specific estimates of deposition in the receptor region for both local and distant sources; and third, time series of deposition patterns throughout the receptor region. This paper reports on the experiences gained in preparing and running the CALMET/CALPUFF system, and on the preliminary results of the analysis of NO{sub x} deposition to the Chesapeake Bay.

  11. Measurements of air pollution in a non-urban, non-agricultural airshed

    SciTech Connect

    Eldering, A.; Glasgow, R.; Cole, A.; Edwards, T.R.

    1996-12-31

    Pocatello, Idaho has an air quality problem that is unique, challenging, and complex. Pocatello is a city of about 46,000 located in southeastern Idaho, almost completely surrounded by national forests and an Indian reservation. It suffers from a particulate matter air pollution problem, with 13 exceedances of the federal particulate air quality standard (a 24-hr average of 150 {mu}g/m{sup 3} of PM10) since 1986. In relation to other cities with a significant particulate air quality problem, Pocatello is not large, and unlike the six counties which had more particulate matter exceedances than Pocatello, it is neither urban nor highly agricultural. Pocatello does have an industrial area, and among the facilities present are manufacturing plants for phosphate fertilizer and for elemental phosphorus. Preliminary investigations show two of these exceedances are associated with fog and low-speed winds. It has been suggested that secondary aerosols may account for a large fraction of PM10 material. In addition, these episodes have been observed to range in duration from a few hours to a period of days, with the geography of the area contributing to complex meteorology.

  12. Environmental factors affecting the levels of legacy pesticides in the airshed of Delaware and Chesapeake Bays

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Weekly air (n=271) and event based rain samples (n=489) collected for the period 2000-2003 from three locations in the Delmarva Peninsula (MD and DE) were utilized to determine levels and temporal trends of legacy pesticides in the atmosphere. The goal was to assess the contribution of atmospheric i...

  13. EXPOSURE TO VOLATILE ORGANIC COMPOUNDS MEASURED IN A SOURCE IMPACTED AIRSHED

    EPA Science Inventory

    A three-year exposure monitoring study is being conducted in a large city in the Midwestern U.S. The study is aimed at determining the factors influencing exposures to air pollutants of outdoor origin, including volatile organic compounds (VOCs) and particulate matter.

  14. POLAR ORGANIC COMPOUNDS IN FINE PARTICLES FROM THE NEW YORK, NEW JERSEY, AND CONNECTICUT REGIONAL AIRSHED

    EPA Science Inventory

    Five key scientific questions guiding this research were explored. They are given here with results generated from the project.
     
    B.1.        How can polar organic compounds be measured in atmospheric fine particulate matter? Is there potential a...

  15. The sensitivity of PM2.5 source-receptor relationships to atmospheric chemistry and transport in a three-dimensional air quality model.

    PubMed

    Seigneur, C; Tonne, C; Vijayaraghavan, K; Pal, P; Levin, L

    2000-03-01

    Air quality model simulations constitute an effective approach to developing source-receptor relationships (so-called transfer coefficients in the risk analysis framework) because a significant fraction of particulate matter (particularly PM2.5) is secondary (i.e., formed in the atmosphere) and, therefore, depends on the atmospheric chemistry of the airshed. In this study, we have used a comprehensive three-dimensional air quality model for PM2.5 (SAQM-AERO) to compare three approaches to generating episodic transfer coefficients for several source regions in the Los Angeles Basin. First, transfer coefficients were developed by conducting PM2.5 SAQM-AERO simulations with reduced emissions of one of four precursors (i.e., primary PM, sulfur dioxide (SO2), oxides of nitrogen (NOx), and volatile organic compounds) from each source region. Next, we calculated transfer coefficients using two other methods: (1) a simplified chemistry for PM2.5 formation, and (2) simplifying assumptions on transport using information limited to basin-wide emission reductions. Transfer coefficients obtained with the simplified chemistry were similar to those obtained with the comprehensive model for VOC emission changes but differed for NOx and SO2 emission changes. The differences were due to the parameterization of the rates of secondary PM formation in the simplified chemistry. In 90% of the cases, transfer coefficients estimated using only basin-wide information were within a factor of two of those obtained with the explicit source-receptor simulations conducted with the comprehensive model. The best agreement was obtained for VOC emission changes; poor agreement was obtained for primary PM2.5. PMID:10734714
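    The brute-force (reduced-emissions) approach to transfer coefficients described above amounts to a finite-difference ratio between paired model runs. A minimal sketch, with invented numbers rather than actual SAQM-AERO output:

```python
# Hypothetical sketch of a source-receptor transfer coefficient derived
# from a pair of model runs (base vs. reduced emissions). All numbers
# are illustrative, not results from the study.

def transfer_coefficient(c_base, c_reduced, e_base, e_reduced):
    """Change in receptor PM2.5 concentration (ug/m3) per unit change
    in precursor emissions (t/day) from a source region."""
    return (c_base - c_reduced) / (e_base - e_reduced)

# Base run: 30 ug/m3 PM2.5 at the receptor with 100 t/day NOx emissions;
# reduced run: 27 ug/m3 with 80 t/day (a 20% NOx cut in the source region).
t = transfer_coefficient(30.0, 27.0, 100.0, 80.0)
print(t)  # 0.15 ug/m3 per t/day
```

    Comparing such coefficients across source regions and precursors is what reveals the factor-of-two spread the authors report for the simplified approaches.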

  16. Characterization of Steel-Ta Dissimilar Metal Builds Made Using Very High Power Ultrasonic Additive Manufacturing (VHP-UAM)

    NASA Astrophysics Data System (ADS)

    Sridharan, Niyanth; Norfolk, Mark; Babu, Sudarsanam Suresh

    2016-05-01

    Ultrasonic additive manufacturing is a solid-state additive manufacturing technique that utilizes ultrasonic vibrations to bond metal tapes into near net-shaped components. The major advantage of this process is the ability to manufacture layered structures with dissimilar materials without any intermetallic formation. The majority of the published literature has focused only on the bond formation mechanism in aluminum alloys. The current work explains the microstructure evolution during dissimilar joining of iron and tantalum using very high power ultrasonic additive manufacturing, with characterization of the interfaces by electron back-scattered diffraction and nano-indentation measurements. The results showed extensive grain refinement at the bonded interfaces of these metals. This phenomenon was attributed to a continuous dynamic recrystallization process driven by the high strain rate plastic deformation and associated adiabatic heating, at temperatures well below 50 pct of the melting point of both iron and Ta.

  17. Structural trends and bonding of the 5f-elements (U-Am) with the oxoligand IO3-.

    SciTech Connect

    Bean, A. C.; Scott, B. L.; Albrecht-Schmitt, T. E.; Runde, W. H.

    2003-01-01

    The solid state chemistry of transuranium compounds has received considerably less attention than that of their uranium analogs, owing to decreased availability and the highly specialized facilities needed to safely study long-lived {alpha}-emitters. However, understanding the behavior of the early transuranium elements is critical for assessing their environmental impact as long-term contributors to radioactive dose in nuclear waste repositories. Of particular interest is how these elements might react with fission product radionuclides such as 129I, or their derivatives. In fact, iodine can exist in solution in both oxidized and reduced forms, i.e., as IO3- and I-, and studies on the nature of 129I in nuclear waste have suggested the existence of iodate, IO3-.

  18. Modeling

    SciTech Connect

    Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

    2005-09-01

    Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

  19. Mathematical Modeling of Photochemical Air Pollution.

    NASA Astrophysics Data System (ADS)

    McRae, Gregory John

    Air pollution is an environmental problem that is both pervasive and difficult to control. An important element of any rational control approach is a reliable means for evaluating the air quality impact of alternative abatement measures. This work presents such a capability, in the form of a mathematical description of the production and transport of photochemical oxidants within an urban airshed. The combined influences of advection, turbulent diffusion, chemical reaction, emissions and surface removal processes are all incorporated into a series of models that are based on the species continuity equations. A delineation of the essential assumptions underlying the formulation of three-dimensional, Lagrangian trajectory, vertically integrated, and single-cell air quality models is presented. Since each model employs common components and input data, the simpler forms can be used for rapid screening calculations and the more complex ones for detailed evaluations. The flow fields, needed for species transport, are constructed using inverse-distance-weighted polynomial interpolation techniques that map routine monitoring data onto a regular computational mesh. Variational analysis procedures are then employed to adjust the field so that mass is conserved. Initial concentration and mixing height distributions can be established with the same interpolation algorithms. Subgrid-scale turbulent transport is characterized by a gradient diffusion hypothesis. Similarity solutions are used to model the surface layer fluxes. Above this layer different treatments of turbulent diffusivity are required to account for variations in atmospheric stability. Convective velocity scaling is utilized to develop eddy diffusivities for unstable conditions. The predicted mixing times are in accord with results obtained during sulfur hexafluoride (SF6) tracer experiments. Conventional models are employed for neutral and stable conditions.
A new formulation for gaseous deposition fluxes
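    The inverse-distance-weighted mapping step described above can be sketched as follows. The station layout, wind values, and the p = 2 weighting exponent are illustrative assumptions, not the thesis' actual configuration:

```python
import numpy as np

# Illustrative sketch (not the author's code): inverse-distance-weighted
# interpolation of station wind observations onto grid points, the kind
# of mapping step used to build flow fields from monitoring data.

def idw(stations, values, grid_pts, p=2.0, eps=1e-12):
    """stations: (n,2) coords; values: (n,) observations; grid_pts: (m,2)."""
    d = np.linalg.norm(grid_pts[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d**p + eps)                 # weights fall off with distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u_obs = np.array([2.0, 4.0, 3.0])          # east-west wind component (m/s)
grid = np.array([[0.5, 0.5]])
print(idw(stations, u_obs, grid))  # [3.] (equidistant point: plain average)
```

    In the thesis this raw interpolated field is then adjusted variationally so that the divergence (and hence mass) constraint is satisfied; the sketch covers only the interpolation step.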

  20. Uncertainty characterization and quantification in air pollution models. Application to the ADMS-Urban model.

    NASA Astrophysics Data System (ADS)

    Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.

    2009-04-01

    uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and its slow convergence. To speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for the Study and Surveillance of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org), in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
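    The antithetic-variable device mentioned above pairs each random draw z with its mirror -z so that errors in the two runs tend to cancel. A toy illustration (the response function and all numbers are assumptions, not the ASPA/ADMS-Urban setup):

```python
import numpy as np

# Sketch of antithetic variables for Monte Carlo uncertainty propagation.
# model() is a stand-in for one perturbed dispersion-model run whose
# output responds smoothly to a perturbed input parameter z.

rng = np.random.default_rng(0)

def model(z):
    return 50.0 * np.exp(0.1 * z)   # toy concentration response (ug/m3)

n = 500
z = rng.standard_normal(n)
plain = model(z).mean()                      # ordinary Monte Carlo mean
anti = 0.5 * (model(z) + model(-z)).mean()   # antithetic pairing
# true mean is 50*exp(0.005); the antithetic estimate typically sits
# much closer to it for the same number of model evaluations
print(round(plain, 2), round(anti, 2))
```

    The pairing costs one extra model run per draw but, for monotone responses like this one, cuts the estimator variance substantially.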

  1. Ionosphere TEC disturbances before strong earthquakes: observations, physics, modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Namgaladze, A. A.

    2013-12-01

    The phenomenon of pre-earthquake ionospheric disturbances is discussed. A number of typical TEC (Total Electron Content) relative disturbances are presented for several recent strong earthquakes that occurred under different ionospheric conditions. Stable, typical TEC deviations from the quiet background state are observed a few days before strong seismic events in the vicinity of the earthquake epicenter and are treated as ionospheric earthquake precursors. They do not move away from the source, in contrast to disturbances related to geomagnetic activity. In the sunlit ionosphere the disturbances weaken, up to their full disappearance, and the effects regenerate at night. The TEC disturbances are often observed in the magnetically conjugate areas as well. At low latitudes they are accompanied by modifications of the equatorial anomaly. The hypothesis of an electromagnetic channel for the creation of the pre-earthquake ionospheric disturbances is discussed. The lithosphere and ionosphere are coupled by vertical external electric currents resulting from ionization of the near-Earth air layer and vertical transport of charged particles through the atmosphere over the fault. External electric current densities exceeding the regular fair-weather electric currents by several orders of magnitude are required to produce stable, long-living seismogenic electric fields such as those observed by onboard measurements of the 'Intercosmos-Bulgaria 1300' satellite over seismically active zones. The numerical calculation results using the Upper Atmosphere Model demonstrate the ability of external electric currents with densities of 10^-8 to 10^-9 A/m^2 to produce such electric fields. The simulations reproduce the basic features of typical pre-earthquake TEC relative disturbances. It is shown that the plasma ExB drift under the action of the seismogenic electric field leads to changes of the F2-region electron number density and TEC. 
The upward drift velocity component enhances NmF2 and TEC and

  2. Winter season air pollution in El Paso-Ciudad Juarez. A review of air pollution studies in an international airshed

    SciTech Connect

    Einfeld, W.; Church, H.W.

    1995-03-01

    This report summarizes a number of research efforts completed over the past 20 years in the El Paso del Norte region to characterize pollution sources and air quality trends. The El Paso del Norte region encompasses the cities of El Paso, Texas and Ciudad Juarez, Chihuahua and is representative of many US-Mexico border communities that are facing important air quality issues as population growth and industrialization of Mexican border communities continue. Special attention is given to a group of studies carried out under special US Congressional funding and administered by the US Environmental Protection Agency. Many of these studies were fielded within the last several years to develop a better understanding of air pollution sources and trends in this typical border community. Summary findings from a wide range of studies dealing with such issues as the temporal and spatial distribution of pollutants and pollution potential from both stationary and mobile sources in both cities are presented. Particular emphasis is given to a recent study that focused on winter season PM{sub 10} pollution in El Paso-Ciudad Juarez. Preliminary estimates from this short-term study reveal that biomass combustion products and crustal material are significant components of winter season PM{sub 10} in this international border community.

  3. SURVEY OF VOLATILE ORGANIC COMPOUNDS ASSOCIATED WITH AUTOMOTIVE EMISSIONS IN THE URBAN AIRSHED OF SAO PAULO, BRAZIL

    EPA Science Inventory

    The Metropolitan Region of Sao Paulo (MRSP), Brazil, is one of the largest metropolitan areas in the world (population 17 million, approx.) and relies heavily on alcohol-based fuels for automobiles. It is estimated that about 40% of the total volume of fuel is ethanol with som...

  4. Characterization of primary and secondary organic aerosols in Melbourne airshed: The influence of biogenic emissions, wood smoke and bushfires

    NASA Astrophysics Data System (ADS)

    Iinuma, Yoshiteru; Keywood, Melita; Herrmann, Hartmut

    2016-04-01

    Detailed chemical characterisation was performed for wintertime and summertime PM10 samples collected in Melbourne, Australia. The samples were analysed for marker compounds of biomass burning and biogenic secondary organic aerosol (SOA). The chemical analysis showed that the site was significantly influenced by the emissions from wintertime domestic wood combustion and summertime bushfires. Monosaccharide anhydrides were the major primary biomass burning marker compounds found in the samples, with average concentrations of 439, 191, 57 and 3630 ngm-3 for winter 2004, winter 2005, summer 2005 and summer 2006, respectively. The highest concentration was determined during the summer 2006 bushfire season, at 15,400 ngm-3. Biomass burning originating SOA compounds detected in the samples include substituted nitrophenols, mainly 4-nitrocatechol (Mr 155), methyl-nitrocatechols (Mr 169) and dimethyl-nitrocatechols (Mr 183), with summed concentrations as high as 115 ngm-3 for the wintertime samples and 770 ngm-3 for the bushfire-influenced samples. In addition to this, elevated levels of biogenic SOA marker compounds were determined in the summertime samples influenced by bushfire smoke. These marker compounds can be categorised into carboxylic acid marker compounds and heteroatomic organic acids containing nitrogen and sulfur. The carboxylic acid marker compounds can be largely attributed to oxidation products originating from 1,8-cineole, α-pinene and β-pinene, which are the main constituents of eucalyptus VOC emissions. Among those, diaterpenylic acid, terpenylic acid and diaterebic acid were found at elevated levels in the bushfire-influenced samples. Heteroatomic monoterpene SOA marker compounds (Mr 295, C10H17NO7S) were detected during both winter and summer periods. Especially high levels of these compounds were determined in the severe bushfire samples from summer 2006. Based on the results obtained from the chemical analysis and a macro tracer method, we estimated that 1,8-cineole SOA alone contributed up to 3.5% of secondary organic carbon mass during the bushfire period in 2006. It is likely that biogenic VOC oxidation can be an important source of biomass burning organic aerosol mass.

  5. Optimal welding parameters for very high power ultrasonic additive manufacturing of smart structures with aluminum 6061 matrix

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.; Hehr, Adam; Dapino, Marcelo J.

    2014-03-01

    Ultrasonic additive manufacturing (UAM) is a recent solid state manufacturing process that combines additive joining of thin metal tapes with subtractive milling operations to generate near net shape metallic parts. Due to the minimal heating during the process, UAM is a proven method of embedding Ni-Ti, Fe-Ga, and PVDF to create active metal matrix composites. Recently, advances in the UAM process utilizing 9 kW very high power (VHP) welding have improved bonding properties, enabling joining of high strength materials previously unweldable with 1 kW low power UAM. Consequently, a design of experiments study was conducted to optimize welding conditions for aluminum 6061 components. This understanding is critical in the design of UAM parts containing smart materials. Build parameters, including weld force, weld speed, amplitude, and temperature, were varied based on a Taguchi experimental design matrix and tested for mechanical strength. Optimal weld parameters were identified with statistical methods including a generalized linear model for analysis of variance (ANOVA), mean effects plots, and interaction effects plots.

  6. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    SciTech Connect

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  7. Solar cycle variation of Mars exospheric temperatures: Critical review of available dayside measurements and recent model simulations

    NASA Astrophysics Data System (ADS)

    Bougher, Stephen; Huestis, David

    The responses of the Martian dayside thermosphere to solar flux variations (on both solar rotation and solar cycle timescales) have been the subject of considerable debate and study for many years. Available datasets include: Mariner 6,7,9 (UVS dayglow), Viking Lander 1-2 (UAMS densities upon descent), several aerobraking campaigns (MGS, Odyssey, MRO densities), and Mars Express (SPICAM dayglow). Radio Science derived plasma scale heights near the ionospheric peak can be used to derive neutral temperatures in this region (only); such values are not applicable to exobase heights (e.g. Forbes et al. 2008; Bougher et al. 2009). Recently, densities and temperatures derived from precise orbit determination of the MGS spacecraft (1999-2005) have been used to establish the responses of Mars' exosphere to long-term solar flux variations (Forbes et al., 2008). From this multi-year dataset, dayside exospheric temperatures weighted toward moderate southern latitudes are found to change by about 120 K over the solar cycle. However, the applicability of these drag derived exospheric temperatures to near solar minimum conditions is suspect (e.g. Bruinsma and Lemoine, 2002). Finally, re-evaluation of production mechanisms for UV dayglow emissions implies revised values for exospheric temperatures (e.g. Simon et al., 2009; Huestis et al. 2010). Several processes are known to influence Mars' exospheric temperatures and their variability (Bougher et al., 1999; 2000; 2009). Solar EUV heating and its variations with solar fluxes received at Mars, CO2 15-micron cooling, molecular thermal conduction, and hydrodynamic heating/cooling associated with global dynamics all contribute to regulate dayside thermospheric temperatures. Poorly measured dayside atomic oxygen abundances render CO2 cooling rates uncertain at the present time. However, global thermospheric circulation models can be exercised for conditions spanning the solar cycle and Mars seasons to address the relative roles of

  8. Validation of the emission inventory in the Sao Paulo Metropolitan Area of Brazil, based on ambient concentrations ratios of CO, NMOG and NO x and on a photochemical model

    NASA Astrophysics Data System (ADS)

    Vivanco, Marta G.; Andrade, Maria de Fátima

    than the official, emission inventories are used, ozone and NO concentrations predicted by the California Institute of Technology (CIT) airshed model more closely match observed values.

  9. Uncertainty characterization and quantification in air pollution models. Application to the CHIMERE model

    NASA Astrophysics Data System (ADS)

    Debry, Edouard; Mallet, Vivien; Garaud, Damien; Malherbe, Laure; Bessagnet, Bertrand; Rouïl, Laurence

    2010-05-01

    probability density function (PDF) is associated with an input parameter, according to its assumed uncertainty. Then the combined PDFs are propagated into the model, by means of several simulations with randomly perturbed input parameters. One may then obtain an approximation of the PDF of modeled concentrations, provided the Monte Carlo process has reasonably converged. The uncertainty analysis with CHIMERE was carried out with a Monte Carlo method over the French domain and two periods: 13 days during January 2009, with a focus on particles, and 28 days during August 2009, with a focus on ozone. The results show that for the summer period and 500 simulations, the time- and space-averaged standard deviation for ozone is 16 µg/m3, to be compared with an averaged concentration of 89 µg/m3. It is noteworthy that the space-averaged standard deviation for ozone is relatively constant over time (the standard deviation of the timeseries itself is 1.6 µg/m3). The spatial variation of the ozone standard deviation seems to indicate that emissions have a significant impact, followed by western boundary conditions. Monte Carlo simulations are then post-processed by both ensemble [4] and Bayesian [5] methods in order to assess the quality of the uncertainty estimation. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. 
(4) Mallet, V., and B
(4) Mallet, V., and B
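    A minimal sketch of this kind of Monte Carlo propagation, assuming a toy sub-linear ozone response and a roughly ±30% lognormal emission uncertainty (both invented for illustration; not the CHIMERE configuration):

```python
import numpy as np

# Illustrative Monte Carlo propagation: perturb an uncertain input
# (here a single emission scaling factor), rerun a toy model, and
# summarize the spread of the output field, in the spirit of the
# space-averaged standard deviation statistic quoted above.

rng = np.random.default_rng(42)
base_field = np.full(100, 89.0)           # baseline ozone field (ug/m3)

def toy_run(emis_factor):
    # stand-in for one CTM simulation with scaled emissions;
    # the square-root response (ozone sub-linear in emissions) is assumed
    return base_field * emis_factor**0.5

factors = rng.lognormal(mean=0.0, sigma=0.3, size=500)
ensemble = np.array([toy_run(f) for f in factors])   # (runs, cells)
spread = ensemble.std(axis=0).mean()      # space-averaged standard deviation
print(round(spread, 1))                   # spread in ug/m3
```

    The real analysis perturbs many parameters at once and post-processes the ensemble further, but the summary statistic is computed the same way.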

  10. Active metal-matrix composites with embedded smart materials by ultrasonic additive manufacturing

    NASA Astrophysics Data System (ADS)

    Hahnlen, Ryan; Dapino, Marcelo J.

    2010-04-01

    This paper presents the development of active aluminum-matrix composites manufactured by Ultrasonic Additive Manufacturing (UAM), an emerging rapid prototyping process based on ultrasonic metal welding. Composites created through this process experience temperatures as low as 25 °C during fabrication, in contrast to current metal-matrix fabrication processes which require temperatures of 500 °C and above. UAM thus provides unprecedented opportunities to develop adaptive structures with seamlessly embedded smart materials and electronic components without degrading the properties that make these materials and components attractive. This research focuses on developing UAM composites with aluminum matrices and embedded shape memory NiTi, magnetostrictive Galfenol, and electroactive PVDF phases. The research on these composites will focus on: (i) electrical insulation between NiTi and Al phases for strain sensors, investigation and modeling of NiTi-Al composites as tunable stiffness materials and thermally invariant structures based on the shape memory effect; (ii) process development and composite testing for Galfenol-Al composites; and (iii) development of PVDF-Al composites for embedded sensing applications. We demonstrate a method to electrically insulate embedded materials from the UAM matrix, the ability to create composites containing up to 22.3% NiTi, and their resulting dimensional stability and thermal actuation characteristics. Also demonstrated is Galfenol-Al composite magnetic actuation of up to 54 μ(see manuscript), and creation of a PVDF-Al composite sensor.

  11. Feasibility study of using the RoboEarth cloud engine for rapid mapping and tracking with small unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2014-11-01

    This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE in mapping and tracking applications in outdoor environments by small UAMS. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3-metre accuracy, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being reformed as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increased accuracy when compared to the sUAMS' autopilot solution, derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.

  12. Contribution of upper airway geometry to convective mixing.

    PubMed

    Jayaraju, Santhosh T; Paiva, Manuel; Brouns, Mark; Lacor, Chris; Verbanck, Sylvia

    2008-12-01

    We investigated the axial dispersive effect of the upper airway structure (comprising mouth cavity, oropharynx, and trachea) on a traversing aerosol bolus. This was done by means of aerosol bolus experiments on a hollow cast of a realistic upper airway model (UAM) and three-dimensional computational fluid dynamics (CFD) simulations in the same UAM geometry. The experiments showed that 50-ml boluses injected into the UAM dispersed to boluses with a half-width ranging from 80 to 90 ml at the UAM exit, across both flow rates (250, 500 ml/s) and both flow directions (inspiration, expiration). These experimental results imply that the net half-width induced by the UAM typically was 69 ml. Comparison of experimental bolus traces with a one-dimensional Gaussian-derived analytical solution resulted in an axial dispersion coefficient of 200-250 cm(2)/s, depending on whether the bolus peak and its half-width or the bolus tail needed to be fully accounted for. CFD simulations agreed well with experimental results for inspiratory boluses and were compatible with an axial dispersion of 200 cm(2)/s. However, for expiratory boluses the CFD simulations showed a very tight bolus peak followed by an elongated tail, in sharp contrast to the expiratory bolus experiments. This indicates that CFD methods that are widely used to predict the fate of aerosols in the human upper airway, where flow is transitional, need to be critically assessed, possibly via aerosol bolus simulations. We conclude that, with all its geometric complexity, the upper airway introduces a relatively mild dispersion on a traversing aerosol bolus for normal breathing flow rates in inspiratory and expiratory flow directions. PMID:18818384
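    The one-dimensional Gaussian-derived solution the bolus traces were compared against can be sketched as follows. The flow speed (from a 250 ml/s flow through an assumed 2 cm^2 cross-section) and D = 225 cm^2/s are illustrative values within the reported 200-250 cm^2/s range:

```python
import numpy as np

# Sketch of a 1D Gaussian tracer-slug solution: an instantaneous bolus
# released at x=0, t=0 spreads with axial dispersion coefficient D while
# advected at mean velocity u (cgs units; numbers are illustrative).

def bolus(x, t, D=225.0, u=125.0, m=1.0):
    """Concentration of a unit slug; variance grows as 2*D*t."""
    sigma2 = 2.0 * D * t
    return m / np.sqrt(2 * np.pi * sigma2) * np.exp(-(x - u * t)**2 / (2 * sigma2))

x = np.linspace(0.0, 100.0, 2001)    # airway axis (cm)
c = bolus(x, t=0.3)
peak = x[np.argmax(c)]
print(round(peak, 1))  # 37.5 -- the peak rides at u*t while the width grows
```

    Fitting the measured half-width growth of such a profile against this solution is what yields the 200-250 cm^2/s dispersion estimate quoted above.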

  13. A new dust transport approach to quantify anthropogenic sources of atmospheric PM10 deposition on lakes

    NASA Astrophysics Data System (ADS)

    Weiss, Lee; Thé, Jesse; Gharabaghi, Bahram; Stainsby, Eleanor A.; Winter, Jennifer G.

    2014-10-01

    Windblown dust simulations are among the most uncertain types of atmospheric transport modeling. This study presents an integrated PM10 emission, transport and deposition model which has been validated using monitored data. This model characterizes the atmospheric phosphorus load, focusing on the major local sources within the Lake Simcoe airshed, including paved and unpaved roads, agricultural sources, construction sites and aggregate mining sources. This new approach substantially reduces uncertainty by providing improved estimates of friction velocities compared with those developed previously. Modeling improvements were also made by generating and validating an hourly windfield using detailed meteorology, topography and land use data for the study area. The model was used to estimate dust emissions generated in the airshed and to simulate the long-range transport and deposition of PM10 to Lake Simcoe. The deposition results from the model were verified against observed bulk collector phosphorus concentration data for both wet and dry deposition. Bulk collector data from stations situated outside the airshed in a remote, undeveloped area were also compared to determine the background contribution from distant sources.
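    As a rough illustration of how friction-velocity estimates enter such emission models, a threshold-based saltation flux of the White (1979) style can be sketched. The constant and the 0.4 m/s threshold are assumptions for illustration, not the study's calibrated values:

```python
# Hedged sketch of a threshold-based windblown dust flux: no emission
# below a threshold friction velocity u*_t, and a flux rising roughly
# with the cube of u* above it (a White-1979-style functional form).
# The constant c and u_star_t = 0.4 m/s are illustrative assumptions.

def saltation_flux(u_star, u_star_t=0.4, c=1.0e-3):
    """Horizontal saltation flux (arbitrary units); zero below threshold."""
    if u_star <= u_star_t:
        return 0.0
    return c * u_star**3 * (1 - u_star_t**2 / u_star**2) * (1 + u_star_t / u_star)

print(saltation_flux(0.3))                          # 0.0 (below threshold)
print(saltation_flux(0.8) > saltation_flux(0.6))    # True (flux grows with u*)
```

    Because the flux is roughly cubic in u*, even modest errors in friction velocity translate into large emission errors, which is why the improved friction-velocity estimates matter so much here.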

  14. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2015-11-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin during two years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the Introduction to the GoAmazon2014/5 Special Issue, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the two-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs also correspond to the clean

  15. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2016-04-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. 
In the context of the Amazon basin, the two IOPs also correspond to the clean and

  16. Impact of urbanization and land-use/land-cover change on diurnal temperature range: a case study of tropical urban airshed of India using remote sensing data.

    PubMed

    Mohan, Manju; Kandya, Anurag

    2015-02-15

    Diurnal temperature range (DTR) is an important climate change index. Knowledge of it is important to a range of issues and themes in earth sciences central to urban climatology and human-environment interactions. The present study investigates the effect of urbanization on the land surface temperature (LST) based DTR. It presents spatial and temporal variations of satellite-based estimates of annually averaged DTR over the megacity Delhi, the capital of India, for a period of 11 years (2001-2011) and analyzes them with regard to land-use/land-cover (LU/LC) changes and population growth. Delhi, which witnessed massive urbanization in terms of population growth (a decadal growth rate of 20.96% during 2001-2011) and major transformations in LU/LC (built-up area crossed 53%), is experiencing severity in its micro- and macroclimate. There was a consistent increase in the areas experiencing DTR below 11°C, which typically resembled the 'urban class': from 26.4% in 2001 to 65.3% in 2011. Subsequently, the DTR of Delhi as a whole, 12.48°C in 2001, gradually reduced to 10.34°C in 2011, exhibiting a significant decreasing trend. Rapidly urbanizing areas like Rohini, Dwarka, Vasant Kunj, Kaushambi, Khanjhawala Village, IIT, Safdarjung Airport, etc. registered a significant decreasing trend in DTR. Against the background of the converging DTR, driven primarily by the increase in minimum temperatures, a grim situation in terms of a potential net increase in heat-related mortality, especially for children below 15 years of age, is envisaged for Delhi. Considering earlier findings that the level of risk of death remained the highest and longest for Delhi, in comparison to megacities like Sao Paulo and London, the study calls for strong and urgent heat island mitigation measures. PMID:25437763

  17. Trends in emissions and concentrations of air pollutants in the lower troposphere in the Baltimore/Washington airshed from 1997 to 2011

    NASA Astrophysics Data System (ADS)

    He, H.; Stehr, J. W.; Hains, J. C.; Krask, D. J.; Doddridge, B. G.; Vinnikov, K. Y.; Canty, T. P.; Hosley, K. M.; Salawitch, R. J.; Worden, H. M.; Dickerson, R. R.

    2013-02-01

    Trends in the composition of the lower atmosphere (0-1500 m altitude) and surface air quality over the Baltimore/Washington area and surrounding states were investigated for the period from 1997 to 2011. We examined emissions, ground-level observations and long-term aircraft measurements to characterize trends in air pollution. The USEPA Continuous Emissions Monitoring System (CEMS) program reported substantial decreases in point sources resulting from national and regional control measures; these decreases are clearly reflected in the ground-level observations. The decreasing trend of CO column contents is ~8.0 Dobson Units (DU) decade⁻¹, corresponding to ~350 ppbv decade⁻¹ in the lower troposphere. Satellite observations of long-term, near-surface CO show a ~40% decrease over western Maryland between 2000 and 2011, the same magnitude as indicated by aircraft measurements over upwind regions of the Baltimore/Washington airshed. After compensating for inter-annual temperature variations, historical aircraft measurements suggest that the daily net production of tropospheric ozone over the Baltimore/Washington area decreased from ~20 ppbv in the late 1990s to ~7 ppbv in the early 2010s during the ozone season. A decrease in the long-term ozone column content of ~2.0 DU decade⁻¹ is observed in the lowest 1500 m, corresponding to a ~13 ppbv decade⁻¹ decrease. Back trajectory cluster analysis demonstrates that emissions of air pollutants transported from Ohio and Pennsylvania through Maryland influence column contents of downwind ozone in the lower atmosphere. The trends of air pollutants reveal the success of regulations implemented over the last decade and the importance of region-wide emission controls over the eastern United States.
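
Decadal trends like those quoted in this abstract are typically estimated by ordinary least squares on an annual (or deseasonalized) series. A minimal sketch on synthetic data; the series below is illustrative, not the study's measurements:

```python
import numpy as np

def decadal_trend(years, values):
    """Ordinary least-squares linear trend expressed per decade, plus the
    standard error of the slope (simple OLS, no autocorrelation correction)."""
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    slope, intercept = np.polyfit(years, values, 1)
    resid = values - (slope * years + intercept)
    sxx = np.sum((years - years.mean()) ** 2)
    se = np.sqrt(np.sum(resid ** 2) / (len(years) - 2) / sxx)
    return 10.0 * slope, 10.0 * se

# Synthetic illustration only: an ozone-like series declining by roughly
# 13 ppbv per decade with year-to-year noise, over 1997-2011.
rng = np.random.default_rng(42)
years = np.arange(1997, 2012)
ozone = 60.0 - 1.3 * (years - 1997) + rng.normal(0.0, 1.0, years.size)
trend, se = decadal_trend(years, ozone)
print(round(trend, 1), "ppbv per decade")
```

Reporting the slope together with its standard error is what justifies calling a trend like the ozone decrease above "observed" rather than noise.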

  18. Source Resolution and Risk Apportionment of Air Emission Sources in an Industrial Complex for Risk Reduction Considerations: An Air Waste Management Methodology.

    NASA Astrophysics Data System (ADS)

    Mukerjee, Shaibal

    The purpose of this study was to develop an air waste management methodology for apportioning the health risks associated with air emission source categories identified in a given airshed. This was implemented by extending the receptor model technique to assess the non-carcinogenic and carcinogenic inhalation risks to an exposed population for certain elemental pollutants determined to be coming from specific emission sources. The concept was demonstrated using air quality data from a mid-sized industrial complex located in a rural/residential area. It was demonstrated that risks from identified major elemental emission categories can be quantified and that a total, additive risk can be determined for the main source categories in the airshed. Potential risk reduction measures were targeted at the main risk sources without arbitrarily reducing risk for all sources in the airshed, making this a cost-effective approach. Dispersion modeling based on previous emission inventory data was used so that risk estimates for these sources could be modeled at other receptor points in the airshed. The factor analytic procedure for source resolution in the initial receptor modeling approach was used to show whether the ambient data were better fitted by Maximum-Likelihood Factor Analysis or Principal Component Analysis for identifying underlying emission sources. It was also shown that Maximum-Likelihood Factor Analysis can be a stronger source resolution procedure than Principal Component Analysis, since Factor Analysis is metrically invariant. Finally, the ambient air data for total particulates were used to expand the Source Resolution and Risk Apportionment concepts to augment the Bubble Policy currently used in air quality management.
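
The source-resolution step (Principal Component Analysis on an ambient element-concentration matrix) can be sketched as follows; the synthetic data, species pairing, and number of factors are assumptions for illustration, not the study's measurements:

```python
import numpy as np

def pca_factors(X, n_factors):
    """Principal component factors of an ambient-concentration matrix X
    (rows = sampling periods, columns = element species).

    Returns (loadings, explained_variance_ratio). Species that load
    strongly on the same factor suggest a common emission source.
    """
    Xc = X - X.mean(axis=0)             # center each species
    Xs = Xc / Xc.std(axis=0, ddof=1)    # unit variance -> correlation PCA
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    return Vt[:n_factors].T, var[:n_factors]

# Synthetic illustration: two hypothetical "sources", a soil-like one
# driving columns 0-1 (e.g. Al/Si) and a combustion-like one driving
# columns 2-3 (e.g. Pb/Br), plus a little measurement noise.
rng = np.random.default_rng(0)
soil = rng.lognormal(size=(100, 1))
comb = rng.lognormal(size=(100, 1))
X = np.hstack([soil, soil * 1.1, comb, comb * 0.9]) + 0.01 * rng.normal(size=(100, 4))
loadings, evr = pca_factors(X, 2)
print(round(float(evr.sum()), 3))  # two factors recover nearly all variance
```

Maximum-likelihood factor analysis differs from this PCA sketch in fitting an explicit noise model per species, which is the "metric invariance" advantage the abstract refers to.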

  19. The challenges on uncertainty analysis for pebble bed HTGR

    SciTech Connect

    Hao, C.; Li, F.; Zhang, H.

    2012-07-01

    Uncertainty analysis is widely practiced and important, and much work has been done for Light Water Reactors (LWRs), although experience with uncertainty analysis in High Temperature Gas-cooled Reactor (HTGR) modeling is still at an early stage. IAEA will soon launch a Coordination Research Project (CRP) on this topic. This paper addresses some challenges for uncertainty analysis in HTGR modeling, based on the experience of the OECD LWR Uncertainty Analysis in Modeling (UAM) activities and taking into account the peculiarities of pebble bed HTGR designs. The main challenges for HTGR UAM are: the lack of experience, the entirely different code packages, and the coupling of power distribution, temperature distribution and burnup distribution through temperature feedback and pebble flow. The most serious challenge is how to deal with the uncertainty in pebble flow and in pebble bed flow modeling, and their contribution to the uncertainty of the maximum fuel temperature, which is the parameter of greatest interest for the modular HTGR. (authors)

  20. Genetic Algorithms and Nucleation in VIH-AIDS transition.

    NASA Astrophysics Data System (ADS)

    Barranon, Armando

    2003-03-01

    The HIV-to-AIDS transition has been modeled via a genetic algorithm that uses the boom-boom principle, with population evolution simulated by a cellular automaton based on the SIR model. The HIV-to-AIDS transition is signaled by nucleation of infected cells, and low infection probabilities are obtained for different mutation rates, in agreement with clinical results. A power law is obtained with a critical exponent close to that of cubic and spherical percolation, colossal magnetoresistance, the Ising model, and the liquid-gas phase transition in heavy ion collisions. Computations were carried out at the UAM-A Supercomputing Lab, and the author acknowledges financial support from the Division of CBI at UAM-A.
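
A cellular automaton driven by SIR rules, as mentioned in this abstract, can be sketched generically; the lattice size, von Neumann neighbourhood, and rates below are illustrative assumptions, not the paper's parameters:

```python
import random

def sir_ca_step(grid, beta, gamma, rng):
    """One synchronous update of a toy SIR cellular automaton.

    Cells are 'S', 'I', or 'R'. A susceptible cell becomes infected with
    probability beta per infected von Neumann neighbour; an infected cell
    recovers with probability gamma. Periodic boundaries. This is a
    generic SIR automaton, not the specific model of the abstract.
    """
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 'S':
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if grid[(i + di) % n][(j + dj) % n] == 'I' and rng.random() < beta:
                        new[i][j] = 'I'
                        break
            elif grid[i][j] == 'I' and rng.random() < gamma:
                new[i][j] = 'R'
    return new

# Seed a single infected cell in the centre and let the epidemic nucleate.
rng = random.Random(1)
n = 20
grid = [['S'] * n for _ in range(n)]
grid[n // 2][n // 2] = 'I'
for _ in range(50):
    grid = sir_ca_step(grid, beta=0.5, gamma=0.2, rng=rng)
recovered = sum(row.count('R') for row in grid)
print(recovered)
```

Tracking the size of the infected/recovered cluster as parameters vary is how nucleation and power-law behaviour of the kind described above would be observed in such an automaton.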

  1. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  2. Deformation Mechanisms in NiTi-Al Composites Fabricated by Ultrasonic Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Hehr, Adam; Dapino, Marcelo J.; Anderson, Peter M.

    2015-09-01

    Thermally active NiTi shape memory alloy (SMA) fibers can be used to tune or tailor the effective coefficient of thermal expansion (CTE) of a metallic matrix composite. In this paper, a novel NiTi-Al composite is fabricated using ultrasonic additive manufacturing (UAM). A combined experimental-simulation approach is used to develop and validate a microstructurally based finite element model of the composite. The simulations are able to closely reproduce the macroscopic strain versus temperature cyclic response, including initial transient effects in the first cycle. They also show that the composite CTE is minimized if the austenite texture in the SMA wires is <001>B2, that a fiber aspect ratio >10 maximizes fiber efficiency, and that the UAM process may reduce hysteresis in embedded SMA wires.

  3. Models and role models.

    PubMed

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in the present-day society triggered us to expand our knowledge in the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study separate processes which together may lead to dental caries. Also products and novel agents could be evaluated that interfere with either of the processes. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models but also role models are of the utmost importance in bringing and guiding research and researchers. PMID:25871413

  4. Global Gene Expression in Staphylococcus aureus Biofilms

    PubMed Central

    Beenken, Karen E.; Dunman, Paul M.; McAleese, Fionnuala; Macapagal, Daphne; Murphy, Ellen; Projan, Steven J.; Blevins, Jon S.; Smeltzer, Mark S.

    2004-01-01

    We previously demonstrated that mutation of the staphylococcal accessory regulator (sarA) in a clinical isolate of Staphylococcus aureus (UAMS-1) results in an impaired capacity to form a biofilm in vitro (K. E. Beenken, J. S. Blevins, and M. S. Smeltzer, Infect. Immun. 71:4206-4211, 2003). In this report, we used a murine model of catheter-based biofilm formation to demonstrate that a UAMS-1 sarA mutant also has a reduced capacity to form a biofilm in vivo. Surprisingly, mutation of the UAMS-1 ica locus had little impact on biofilm formation in vitro or in vivo. In an effort to identify additional loci that might be relevant to biofilm formation and/or the adaptive response required for persistence of S. aureus within a biofilm, we isolated total cellular RNA from UAMS-1 harvested from a biofilm grown in a flow cell and compared the transcriptional profile of this RNA to RNA isolated from both exponential- and stationary-phase planktonic cultures. Comparisons were done using a custom-made Affymetrix GeneChip representing the genomic complement of six strains of S. aureus (COL, N315, Mu50, NCTC 8325, EMRSA-16 [strain 252], and MSSA-476). The results confirm that the sessile lifestyle associated with persistence within a biofilm is distinct by comparison to the lifestyles of both the exponential and postexponential phases of planktonic culture. Indeed, we identified 48 genes in which expression was induced at least twofold in biofilms over expression under both planktonic conditions. Similarly, we identified 84 genes in which expression was repressed by a factor of at least 2 compared to expression under both planktonic conditions. A primary theme that emerged from the analysis of these genes is that persistence within a biofilm requires an adaptive response that limits the deleterious effects of the reduced pH associated with anaerobic growth conditions. PMID:15231800

  5. MODEL DEVELOPMENT - DOSE MODELS

    EPA Science Inventory

    Model Development

    Humans are exposed to mixtures of chemicals from multiple pathways and routes. These exposures may result from a single event or may accumulate over time if multiple exposure events occur. The traditional approach of assessing risk from a single chemica...

  6. Promoting Models

    NASA Astrophysics Data System (ADS)

    Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si

    There can be multitudinous models specifying aspects of the same system. Each model has a bias towards one aspect. These models often overlap in specific aspects even though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. The paper proposes the concept of promoting models, a methodology for obtaining refinements with support from cooperating models. It refines a primary model by integrating information from a secondary model. The promotion principle is not merely an academic point but also a reliable and robust engineering technique which can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case study modeling a simple online shopping system with the cooperation of the guarded design model and the CSP model illustrates the practicability of the promotion principle.

  7. Models, Part IV: Inquiry Models.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2002-01-01

    Discusses models for information skills that include inquiry-oriented activities. Highlights include WebQuest, which uses Internet resources supplemented with videoconferencing; Minnesota's Inquiry Process based on the Big Six model for information problem-solving; Indiana's Student Inquiry Model; constructivist learning models for inquiry; and…

  8. Supermatrix models

    SciTech Connect

    Yost, S.A.

    1991-05-01

    Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two-component plasma in one dimension. A stationary point of the model is described.

  9. Supermatrix models

    SciTech Connect

    Yost, S.A. . Dept. of Physics and Astronomy)

    1992-09-30

    In this paper, random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two-component plasma in one dimension. A stationary point of the model is described.

  10. MODELS - 3

    EPA Science Inventory

    Models-3 is a third generation air quality modeling system that contains a variety of tools to perform research and analysis of critical environmental questions and problems. These tools provide regulatory analysts and scientists with quicker results, greater scientific accuracy ...

  11. ENTRAINMENT MODELS

    EPA Science Inventory

    This presentation presented information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that the problems that can be solved are rather simple. The ...

  12. Turbulence modeling

    NASA Technical Reports Server (NTRS)

    Rubesin, Morris W.

    1987-01-01

    Recent developments at several levels of statistical turbulence modeling applicable to aerodynamics are briefly surveyed. Emphasis is on examples of model improvements for transonic, two-dimensional flows. Experience with the development of these improved models is cited to suggest methods of accelerating the modeling process necessary to keep abreast of the rapid movement of computational fluid dynamics into the computation of complex three-dimensional flows.

  13. Waveguide model

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A model is presented which quantifies the electromagnetic modes (field configurations) in the immediate vicinity of the rectenna element. Specifically, the waveguide model characterizes the electromagnetic modes generated by planar waves normal to the array. The model applies only to incidence normal to the array.

  14. Phoenix model

    EPA Science Inventory

    Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...

  15. Radiation Models

    ERIC Educational Resources Information Center

    James, W. G. G.

    1970-01-01

    Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

  16. Hydrological models are mediating models

    NASA Astrophysics Data System (ADS)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, as they are rarely explicitly presented in peer-reviewed literature. 
We believe that devoting

  17. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  18. Model Reduction in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Yeh, W. W. G.

    2014-12-01

    Model reduction has been shown to be a very effective method for reducing the computational burden of large-scale simulations. Model reduction techniques preserve much of the physical knowledge of the system and primarily seek to remove components from the model that do not provide significant information of interest. Proper Orthogonal Decomposition (POD) is a model reduction technique by which a system of ordinary differential equations is projected onto a much smaller subspace in such a way that the span of the subspace equals the span of the original full model space. Basically, the POD technique selects a small number of orthonormal basis functions (principal components) that span the spatial variability of the solutions. In this way the state variable (head) is approximated by a linear combination of these basis functions and, using a Galerkin projection, the dimension of the problem is significantly reduced. It has been shown that for a highly discretized model, the reduced model can be two to three orders of magnitude smaller than the original model and runs 1,000 times faster. More importantly, the reduced model captures the dominant characteristics of the full model and produces sufficiently accurate solutions. One of the major tasks in developing the reduced model is the selection of the snapshots used to determine the dominant eigenvectors. This paper discusses ways to optimize the snapshot selection. Additionally, the paper discusses applications of the reduced model to parameter estimation, Monte Carlo simulation and experimental design in groundwater modeling.
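
The snapshot/POD/Galerkin procedure described in this abstract can be illustrated on a generic sparse linear system; the 1-D diffusion stencil below stands in for a discretized groundwater model and is an assumption for illustration:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading-r POD basis from a snapshot matrix (columns = state snapshots)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduce_system(A, b, Phi):
    """Galerkin projection of A x = b onto span(Phi): (Phi^T A Phi) y = Phi^T b."""
    return Phi.T @ A @ Phi, Phi.T @ b

# Full model: a 200-unknown tridiagonal (1-D diffusion) system.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_grid = np.linspace(0.0, 1.0, n)

# Snapshots: solutions for a few assumed forcing vectors.
forcings = [np.sin((k + 1) * np.pi * x_grid) for k in range(5)]
snapshots = np.column_stack([np.linalg.solve(A, f) for f in forcings])

# Build the basis, project, solve the 5x5 reduced system, and lift back.
Phi = pod_basis(snapshots, r=5)
b = np.sin(2 * np.pi * x_grid)           # forcing inside the snapshot space
Ar, br = reduce_system(A, b, Phi)
x_reduced = Phi @ np.linalg.solve(Ar, br)
x_full = np.linalg.solve(A, b)
print(np.linalg.norm(x_full - x_reduced) / np.linalg.norm(x_full))  # tiny
```

Because the test forcing lies in the snapshot span, the 5-dimensional reduced system reproduces the full 200-dimensional solution almost exactly; snapshot selection matters precisely because forcings outside that span are only approximated.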

  19. Modeling Pharmacokinetics.

    PubMed

    Bois, Frederic Y; Brochot, Céline

    2016-01-01

    Pharmacokinetics is the study of the fate of xenobiotics in a living organism. Physiologically based pharmacokinetic (PBPK) models provide realistic descriptions of xenobiotics' absorption, distribution, metabolism, and excretion processes. They model the body as a set of homogeneous compartments representing organs, and their parameters refer to anatomical, physiological, biochemical, and physicochemical entities. They offer a quantitative mechanistic framework to understand and simulate the time-course of the concentration of a substance in various organs and body fluids. These models are well suited for performing extrapolations inherent to toxicology and pharmacology (e.g., between species or doses) and for integrating data obtained from various sources (e.g., in vitro or in vivo experiments, structure-activity models). In this chapter, we describe the practical development and basic use of a PBPK model from model building to model simulations, through implementation with an easily accessible free software. PMID:27311461
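
PBPK models couple many organ compartments; the simplest building block is a single compartment with first-order absorption and elimination, whose concentration-time course has the classic closed-form (Bateman) solution. A sketch with purely illustrative parameters, not values for any real substance:

```python
import math

def conc_one_compartment(t, dose, F, V, ka, ke):
    """Plasma concentration for a one-compartment model with first-order
    absorption rate ka and elimination rate ke (requires ka != ke).

    C(t) = F*Dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
    Full PBPK models chain many such compartments via blood flows.
    """
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Assumed illustrative parameters: 100 mg oral dose, bioavailability 0.8,
# distribution volume 40 L, ka = 1.0 /h, ke = 0.1 /h; hourly samples.
c = [conc_one_compartment(t, 100.0, 0.8, 40.0, 1.0, 0.1) for t in range(25)]
tmax = c.index(max(c))
print(tmax, round(c[tmax], 3))  # hour of peak concentration, mg/L
```

Even this single-compartment sketch shows the rise-and-fall shape that PBPK models reproduce organ by organ; extrapolation between species or doses then amounts to changing the physiological parameters rather than refitting an empirical curve.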

  20. Ventilation Model

    SciTech Connect

    H. Yang

    1999-11-04

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future.

  1. Climate Models

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  2. Phenomenological models

    SciTech Connect

    Braby, L.A.

    1990-09-01

    The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. A range of models covering different endpoints and phenomena has developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. 43 refs., 13 figs.

  3. Building models

    SciTech Connect

    Burr, M.T.

    1995-04-01

    As developers make progress on independent power projects around the world, models for success are beginning to emerge. Different models are evolving to create ownership structures that accommodate a complex system of regulatory requirements. Other frameworks make use of previously untapped fuel resources, or establish new sources of financing; however, not all models may be applied to a given project. This article explores how developers are finding new alternatives for overcoming development challenges that are common to projects in many countries.

  4. Ventilation Model

    SciTech Connect

    V. Chipman

    2002-10-05

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post

  5. Model Selection for Geostatistical Models

    SciTech Connect

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
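The AIC bookkeeping the abstract builds on, AIC = 2k - 2 ln L, can be sketched for an ordinary (non-spatial) regression on simulated data. Note that this naive version deliberately ignores spatial correlation, which is exactly the practice the paper warns can mislead variable selection:

```python
import math, random

# AIC-based variable selection sketch for ordinary least squares with
# independent Gaussian errors (simulated data; not the paper's model).
random.seed(1)
n = 200
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0.0, 1.0) for xi in x]

def gauss_aic(rss, n, k):
    # profile Gaussian log-likelihood with sigma^2 = rss / n
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1.0)
    return 2 * k - 2 * loglik

# Model 0: intercept only
ybar = sum(y) / n
rss0 = sum((yi - ybar) ** 2 for yi in y)

# Model 1: intercept + slope (closed-form simple regression)
xbar = sum(x) / n
beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        / sum((xi - xbar) ** 2 for xi in x))
alpha = ybar - beta * xbar
rss1 = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))

aic0 = gauss_aic(rss0, n, k=2)   # intercept + variance
aic1 = gauss_aic(rss1, n, k=3)   # intercept + slope + variance
print(aic0, aic1)                # lower AIC wins; here the slope model should
```

The geostatistical version replaces this independent-error likelihood with one that includes a spatial covariance, so k also counts the covariance parameters.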

  6. Turbulence modeling

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge E.

    1995-01-01

    The objective of this work is to develop, verify, and incorporate the baseline two-equation turbulence models which account for the effects of compressibility into the three-dimensional Reynolds averaged Navier-Stokes (RANS) code and to provide documented descriptions of the models and their numerical procedures so that they can be implemented into 3-D CFD codes for engineering applications.

  7. Budget Model.

    ERIC Educational Resources Information Center

    Washington State Board for Community Coll. Education, Olympia.

    Computerized formula-driven budget models are used by the Washington community college system to define resource needs for legislative budget requests and to distribute legislative appropriations among 22 community college districts. This manual outlines the sources of information needed to operate the model and illustrates the principles on which…

  8. Dispersion Modeling.

    ERIC Educational Resources Information Center

    Budiansky, Stephen

    1980-01-01

    This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispension. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)

  9. Modeling Sunspots

    ERIC Educational Resources Information Center

    Oh, Phil Seok; Oh, Sung Jin

    2013-01-01

    Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…

  10. Phonological Models.

    ERIC Educational Resources Information Center

    Ballard, W.L.

    1968-01-01

    The article discusses models of synchronic and diachronic phonology and suggests changes in them. The basic generative model of phonology is outlined with the author's reinterpretations. The systematic phonemic level is questioned in terms of its unreality with respect to linguistic performance and its lack of validity with respect to historical…

  11. Zitterbewegung modeling

    SciTech Connect

    Hestenes, D.

    1993-03-01

    Guidelines for constructing point particle models of the electron with zitterbewegung and other features of the Dirac theory are discussed. Such models may at least be useful approximations to the Dirac theory, but the more exciting possibility is that this approach may lead to a more fundamental reality. 6 refs.

  12. The ionospheric current system and its contribution to the Earth's magnetic field

    NASA Astrophysics Data System (ADS)

    Prokhorov, Boris E.; Förster, Matthias; Stolle, Claudia; Lesur, Vincent; Namgalagze, Alexander A.; Holschneider, Matthias

    2016-04-01

    The ionospheric currents are a highly variable part of the coupled Magnetosphere - Ionosphere - Thermosphere (MIT) system. This system is driven by the solar wind and Interplanetary Magnetic Field (IMF) interaction with the Earth's magnetosphere. The solar wind and IMF interactions transfer energy to the MIT system via reconnection processes at the magnetopause. The Field Aligned Currents (FACs) constitute the energetic link between the magnetosphere and the Earth's ionosphere. The system of ionospheric currents depends on the geomagnetic conditions and has significant seasonal and UT variation. The first aim of the present investigation is to model the global dynamic ionospheric current system. For this purpose, we use an improved version of the first-principle, time-dependent, and fully self-consistent numerical global Upper Atmosphere Model (UAM-P). This model describes the thermosphere, ionosphere, plasmasphere and inner magnetosphere as well as the electrodynamics of the coupled MIT system for the altitudinal range from 80 (60) km up to 15 Earth radii. For this study, the low-latitude and equatorial electrodynamics of the UAM-P model was improved. The second aim of this research is to calculate the ionospheric contribution to the Earth's magnetic field. The additional magnetic field is obtained from the global ionospheric current system calculated with the UAM-P model. The ionospheric magnetic field is calculated using the Biot-Savart law. The maximum magnitudes of the ionospheric magnetic field are located close to the areas of the auroral and equatorial electrojets. The contribution of the equatorial electrojet to the magnetic field is significant and comparable to the influence of the high latitude current system.
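The final step the abstract describes, obtaining a magnetic field from a current system via the Biot-Savart law, can be sketched in miniature with a discretized circular current loop evaluated at its centre, where the analytic answer mu0*I/(2R) is known. The geometry and current value here are arbitrary; the UAM-P current system is of course far more complex:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_field_at_center(current=1.0, radius=1.0, segments=1000):
    """Sum Biot-Savart contributions dB = mu0 I/(4 pi) * dl x r' / |r'|^3
    over straight segments of a circular loop; field point at the origin."""
    bz = 0.0
    for k in range(segments):
        t0 = 2 * math.pi * k / segments
        t1 = 2 * math.pi * (k + 1) / segments
        tm = 0.5 * (t0 + t1)
        rx, ry = radius * math.cos(tm), radius * math.sin(tm)   # source point
        dlx = radius * (math.cos(t1) - math.cos(t0))            # segment vector
        dly = radius * (math.sin(t1) - math.sin(t0))
        r2 = rx * rx + ry * ry          # |field point - source|^2
        # z-component of dl x r', with r' = (0,0,0) - (rx,ry,0)
        cross_z = dlx * (-ry) - dly * (-rx)
        bz += MU0 * current / (4 * math.pi) * cross_z / r2 ** 1.5
    return bz

b = loop_field_at_center()
analytic = MU0 * 1.0 / 2.0   # mu0 * I / (2R) for I=1 A, R=1 m
print(b, analytic)
```

The same summation, applied per grid cell of a modeled current distribution, is how a global current system yields its magnetic contribution.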

  13. OSPREY Model

    SciTech Connect

    Veronica J. Rutledge

    2013-01-01

    The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
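The use of breakthrough data to determine bed capacity, mentioned above, amounts to integrating the unadsorbed fraction of the feed over time. A minimal mass-balance sketch follows; the logistic-shaped breakthrough curve is synthetic stand-in data, not OSPREY output:

```python
import math

def bed_capacity(times, c_over_c0, feed_rate):
    """Mass adsorbed = integral of feed_rate * (1 - C/C0) dt (trapezoid rule).

    times     : hours
    c_over_c0 : outlet/inlet concentration ratio at each time
    feed_rate : constituent mass feed rate (e.g. mg/h)
    """
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        f0 = 1.0 - c_over_c0[i - 1]
        f1 = 1.0 - c_over_c0[i]
        total += feed_rate * 0.5 * (f0 + f1) * dt
    return total

times = [0.1 * i for i in range(1001)]                        # 0..100 h
curve = [1.0 / (1.0 + math.exp(-(t - 50.0))) for t in times]  # synthetic C/C0
capacity = bed_capacity(times, curve, feed_rate=2.0)
print(capacity)   # symmetric curve centered at 50 h -> about 100 mg
```

Dividing such a capacity by the sorbent mass gives the loading used to size columns.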

  14. UNMIX modeling of ambient PM2.5 near an interstate highway in Cincinnati, OH, USA

    PubMed Central

    Hu, Shaohua; McDonald, Rafael; Martuzevicius, Dainius; Biswas, Pratim; Grinshpun, Sergey A.; Kelley, Anna; Reponen, Tiina; Lockey, James; LeMasters, Grace

    2008-01-01

    The “Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS)” is underway to determine if infants who are exposed to diesel engine exhaust particles are at an increased risk for atopy and atopic respiratory disorders, and to determine if this effect is magnified in a genetically at risk population. In support of this study, a methodology has been developed to allocate local traffic source contributions to ambient PM2.5 in the Cincinnati airshed. As a first step towards this allocation, UNMIX was used to generate factors for ambient PM2.5 at two sites near an interstate highway. Procedures adopted to collect, analyze and prepare the data sets to run UNMIX are described. The factors attributed to traffic sources were similar for the two sites. These factors were also similar to locally measured truck engine-exhaust enriched ambient profiles. The temporal variation of the factors was analyzed with clear differences observed between factors attributed to traffic sources and combustion-related regional secondary sources. PMID:21720518

  15. Phenomenological models.

    PubMed

    Braby, L A

    1991-01-01

    The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions which are modified by characteristics of the radiation, the timing of its administration, the chemical and physical environment, and the nature of the biological system. However, it is generally agreed that the health effects in animals originate from changes in individual cells, or possibly small groups of cells, and that these cellular changes are initiated by ionizations and excitations produced by the passage of charged particles through the cells. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. Different phenomena (LET dependence, dose rate effect, oxygen effect etc.) and different end points (cell survival, aberration formation, transformation, etc.) have been observed, and no single model has been developed to cover all of them. Instead, a range of models covering different end points and phenomena have developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. PMID:1811477

  16. Stereometric Modelling

    NASA Astrophysics Data System (ADS)

    Grimaldi, P.

    2012-07-01

    Stereometric modelling means modelling achieved with: the use of a pair of virtual cameras with parallel axes, positioned at a mutual distance averaging 1/10 of the camera-to-object distance (in practice, the realization and use of a stereometric camera in the modelling program); the visualization of the shot in two distinct windows; and the stereoscopic viewing of the shot while modelling. Since "3D vision" is often used inaccurately for the simple perspective rendering of an object, the word stereo must be added, so that "3D stereo vision" stands for a true three-dimensional view in which the width, height, and depth of the surveyed image can be measured. Through the development of a stereometric model, either real or virtual, and the "materialization", real or virtual, of the optical-stereometric model made visible with a stereoscope, a continuous on-line updating of the cultural heritage becomes feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  17. Model hydrographs

    USGS Publications Warehouse

    Mitchell, W.D.

    1972-01-01

    Model hydrographs are composed of pairs of dimensionless ratios, arrayed in tabular form, which, when modified by the appropriate values of rainfall excess and by the time and areal characteristics of the drainage basin, satisfactorily represent the flood hydrograph for the basin. Model hydrographs are developed from a dimensionless translation hydrograph, having a time base of T hours and appropriately modified for storm duration by routing through reservoir storage, S = kO^x. Models fall into two distinct classes: (1) those for which the value of x is unity and which have all the characteristics of true unit hydrographs and (2) those for which the value of x is other than unity and to which the unit-hydrograph principles of proportionality and superposition do not apply. Twenty-six families of linear models and eight families of nonlinear models in tabular form form the principal subject of this report. Supplemental discussions describe the development of the models and illustrate their application. Other sections of the report, supplemental to the tables, describe methods of determining the hydrograph characteristics, T, k, and x, both from observed hydrographs and from the physical characteristics of the drainage basin. Five illustrative examples of use show that the models, when properly converted to incorporate actual rainfall excess and the time and areal characteristics of the drainage basins, do indeed satisfactorily represent the observed flood hydrographs for the basins.
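The linear (x = 1) storage case can be sketched as level-pool routing of a rainfall-excess pulse through S = k*O; the inflow values and the value of k below are illustrative, not the report's tabulated models:

```python
def route_linear_reservoir(inflow, k, dt=1.0):
    """Route an inflow hydrograph through linear storage S = k*O.

    Solves dS/dt = I - O, i.e. k*dO/dt = I - O, with a trapezoidal
    (implicit) step: O1*(k/dt + 1/2) = I_avg + O0*(k/dt - 1/2).
    """
    outflow = [0.0]
    for i in range(1, len(inflow)):
        i_avg = 0.5 * (inflow[i - 1] + inflow[i])
        o_prev = outflow[-1]
        o_new = (o_prev * (k / dt - 0.5) + i_avg) / (k / dt + 0.5)
        outflow.append(o_new)
    return outflow

inflow = [0, 10, 30, 20, 10, 5, 0, 0, 0, 0, 0, 0]   # rainfall-excess pulse
out = route_linear_reservoir(inflow, k=2.0)
print(max(out), max(inflow))   # routed peak is attenuated and delayed
```

Because this case is linear, proportionality and superposition hold, which is exactly what distinguishes the report's first class of models from the nonlinear (x != 1) class.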

  18. Programming models

    SciTech Connect

    Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim

    2011-01-14

    A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

  19. Energy Models

    EPA Science Inventory

    Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...

  20. Social Network Analysis of Biomedical Research Collaboration Networks in a CTSA Institution

    PubMed Central

    Bian, Jiang; Xie, Mengjun; Topaloglu, Umit; Hudson, Teresa; Eswaran, Hari; Hogan, William

    2014-01-01

    BACKGROUND The popularity of social networks has triggered a number of research efforts on network analyses of research collaborations in the Clinical and Translational Science Award (CTSA) community. Those studies mainly focus on the general understanding of collaboration networks by measuring common network metrics. More fundamental questions about collaborations still remain unanswered such as recognizing “influential” nodes and identifying potential new collaborations that are most rewarding. METHODS We analyzed biomedical research collaboration networks (RCNs) constructed from a dataset of research grants collected at a CTSA institution (i.e. University of Arkansas for Medical Sciences (UAMS)) in a comprehensive and systematic manner. First, our analysis covers the full spectrum of a RCN study: from network modeling to network characteristics measurement, from key nodes recognition to potential links (collaborations) suggestion. Second, our analysis employs non-conventional models and techniques including a weighted network model for representing collaboration strength, rank aggregation for detecting important nodes, and Random Walk with Restart (RWR) for suggesting new research collaborations. RESULTS By applying our models and techniques to RCNs at UAMS prior to and after the CTSA, we have gained valuable insights that not only reveal the temporal evolution of the network dynamics but also assess the effectiveness of the CTSA and its impact on a research institution. We find that collaboration networks at UAMS are not scale-free but small-world. Quantitative measures have been obtained as evidence that the RCNs at UAMS are moving towards favoring multidisciplinary research. Moreover, our link prediction model creates the basis of collaboration recommendations with impressive accuracy (AUC: 0.990, MAP@3: 1.48 and MAP@5: 1.522). Last but not least, an open-source visual analytical tool for RCNs is being developed and released through Github. CONCLUSIONS
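The Random Walk with Restart (RWR) scoring mentioned above can be sketched on a toy collaboration graph via power iteration; the graph, seed node, and restart probability are illustrative, not the UAMS data or the paper's weighted model:

```python
def rwr(adj, seed, restart=0.15, iters=200):
    """Random Walk with Restart scores relative to a seed node.

    adj: symmetric adjacency dict {node: set(neighbors)}.
    Each step, mass restarts at the seed with probability `restart`
    and otherwise spreads uniformly to neighbors.
    """
    nodes = list(adj)
    p = {v: 1.0 if v == seed else 0.0 for v in nodes}
    for _ in range(iters):
        nxt = {v: (restart if v == seed else 0.0) for v in nodes}
        for u in nodes:
            deg = len(adj[u])
            if deg:
                share = (1.0 - restart) * p[u] / deg
                for v in adj[u]:
                    nxt[v] += share
        p = nxt
    return p

# A collaborates with B and C; D is reachable only through C.
adj = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"}, "D": {"C"}}
scores = rwr(adj, seed="A")
print(sorted(scores, key=scores.get, reverse=True))
```

Ranking the non-neighbors of a seed researcher by their RWR score is one simple way such a model can suggest new collaborations.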

  1. Mechanistic models

    SciTech Connect

    Curtis, S.B.

    1990-09-01

    Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis.
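Mechanism (2) above, a linear-quadratic yield of initial lesions, produces the familiar shouldered survival curve S(D) = exp(-(alpha*D + beta*D^2)). A sketch with illustrative alpha and beta values (not taken from the review):

```python
import math

def surviving_fraction(dose_gy, alpha=0.2, beta=0.02):
    """Linear-quadratic cell survival: S(D) = exp(-(alpha*D + beta*D^2)).

    alpha (per Gy) and beta (per Gy^2) are illustrative placeholders.
    """
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

for d in (0, 2, 4, 8):
    print(d, surviving_fraction(d))
# The quadratic term bends log-survival downward with dose: each
# successive dose increment kills a larger fraction, which is the
# "shoulder" the models above try to explain mechanistically.
```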

  2. Mechanistic models

    SciTech Connect

    Curtis, S.B.

    1990-09-01

    Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis.

  3. Modeling reality

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.

  4. Molecular Modeling

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.

    1999-06-01

    Molecular modeling has trickled down from the realm of pharmaceutical and research laboratories into the realm of undergraduate chemistry instruction. It has opened avenues for the visualization of chemical concepts that previously were difficult or impossible to convey. I am sure that many of you have developed exercises using the various molecular modeling tools. It is the desire of this Journal to become an avenue for you to share these exercises among your colleagues. It is to this end that Ron Starkey has agreed to edit such a column and to publish not only the description of such exercises, but also the software documents they use. The WWW is the obvious medium to distribute this combination and so accepted submissions will appear online as a feature of JCE Internet. Typical molecular modeling exercise: finding conformation energies. Molecular Modeling Exercises and Experiments is the latest feature column of JCE Internet, joining Conceptual Questions and Challenge Problems, Hal's Picks, and Mathcad in the Chemistry Curriculum. JCE Internet continues to seek submissions in these areas of interest and submissions of general interest. If you have developed materials and would like to submit them, please see our Guide to Submissions for more information. The Chemical Education Resource Shelf, Equipment Buyers Guide, and WWW Site Review would also like to hear about chemistry textbooks and software, equipment, and WWW sites, respectively. Please consult JCE Internet Features to learn more about these resources at JCE Online. Email Announcements Would you like to be informed by email when the latest issue of the Journal is available online? when a new JCE Software title is shipping? when a new JCE Internet article has been published or is available for Open Review? when your subscription is about to expire? A new feature of JCE Online makes this possible. Visit our Guestbook to learn how. When

  5. Supernova models

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1980-01-01

    Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the ⁵⁶Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.

  6. Atmospheric Modeling

    EPA Science Inventory

    Although air quality models have been applied historically to address issues specific to ambient air quality standards (i.e., one criteria pollutant at a time) or welfare (e.g.. acid deposition or visibility impairment). they are inherently multipollutant based. Therefore. in pri...

  7. Ensemble Models

    EPA Science Inventory

    Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...

  8. Modeling Convection

    ERIC Educational Resources Information Center

    Ebert, James R.; Elliott, Nancy A.; Hurteau, Laura; Schulz, Amanda

    2004-01-01

    Students must understand the fundamental process of convection before they can grasp a wide variety of Earth processes, many of which may seem abstract because of the scales on which they operate. Presentation of a very visual, concrete model prior to instruction on these topics may facilitate students' understanding of processes that are largely…

  9. Painting models

    NASA Astrophysics Data System (ADS)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics, and art. Here we apply this art form to numerical models and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an html5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source software. By using different sources for a drawing, one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology, and turbulence.

  10. Entrepreneurship Models.

    ERIC Educational Resources Information Center

    Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.

    This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…

  11. Modeling Lessons

    ERIC Educational Resources Information Center

    Casey, Katherine

    2011-01-01

    As teachers learn new pedagogical strategies, they crave explicit demonstrations that show them how the new strategies will work with their students in their classrooms. Successful instructional coaches, therefore, understand the importance of modeling lessons to help teachers develop a vision of effective instruction. The author, an experienced…

  12. Modeling Muscles

    ERIC Educational Resources Information Center

    Goodwyn, Lauren; Salm, Sarah

    2007-01-01

    Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…

  13. Criticality Model

    SciTech Connect

    A. Alsaed

    2004-09-14

The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range-of-applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality…

  14. Ultrasonic Additive Manufacturing: Weld Optimization for Aluminum 6061, Development of Scarf Joints for Aluminum Sheet Metal, and Joining of High Strength Metals

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.

Ultrasonic additive manufacturing (UAM) is a low-temperature, solid-state manufacturing process that enables the creation of layered, solid metal structures with designed anisotropies and embedded materials. As a low-temperature process, UAM enables the creation of active composites containing smart materials, components with embedded sensors, thermal management devices, and many others. The focus of this work is on the improvement and characterization of UAM aluminum structures, advancing the capabilities of ultrasonic joining into sheet geometries, and examination of dissimilar material joints using the technology. Optimized process parameters for Al 6061 were identified via a design-of-experiments study, indicating a weld amplitude of 32.8 µm and a weld speed of 200 in/min as optimal. Weld force and temperature were not significant within the levels studied. A methodology for creating large-scale builds is proposed, including a prescribed random stacking sequence and a foil overlap of 0.0035 in. (0.0889 mm) to minimize voids and maximize mechanical strength. Utilization of heat treatments is shown to significantly increase the mechanical properties of UAM builds, to within 90% of bulk material. The loads applied during the UAM process were investigated to determine the stress fields and plastic deformation induced during the process. Modeling of the contact mechanics via Hertzian contact equations shows that significant stress is applied via sonotrode contact in the process. Contact modeling using finite element analysis (FEA), including plasticity, indicates that 5000 N normal loads result in plastic deformation in bulk aluminum foil, while at 3000 N no plastic deformation occurs. FEA studies of the applied loads during the process, specifically a 3000 N normal force and 2000 N shear force, show that high stresses and plastic deformation occur at the edges of a welded foil and at the base of the UAM build. Microstructural investigations of heat-treated foils confirm…
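The Hertzian contact modeling mentioned in this abstract can be illustrated with the textbook cylinder-on-flat line-contact formulas; the sonotrode radius, contact length, and material constants below are assumed round numbers for aluminum, not values taken from the dissertation:

```python
import math

def hertz_line_contact(P, R, Lc, E1, nu1, E2, nu2):
    """Return (contact half-width b, peak pressure p_max) for a cylinder
    of radius R pressed with force P onto a flat over contact length Lc."""
    # Effective contact modulus from both bodies' elastic constants
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    w = P / Lc                                   # load per unit contact length
    b = math.sqrt(4 * w * R / (math.pi * E_star))
    p_max = 2 * w / (math.pi * b)
    return b, p_max

# Hypothetical case: 5000 N over a 25 mm wide sonotrode of 48 mm radius
# pressed on aluminum (E = 69 GPa, nu = 0.33 on both sides).
b, p_max = hertz_line_contact(P=5000, R=0.048, Lc=0.025,
                              E1=69e9, nu1=0.33, E2=69e9, nu2=0.33)
print(f"contact half-width = {b*1e6:.0f} um, peak pressure = {p_max/1e6:.0f} MPa")
```

With these assumed numbers the peak pressure lands in the hundreds of MPa, which is consistent with the abstract's finding that the higher normal loads plastically deform aluminum foil.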

  15. Modeling Molecules

    NASA Technical Reports Server (NTRS)

    2000-01-01

The molecule modeling method known as Multibody Order (N) Dynamics, or MBO(N)D, was developed by Moldyn, Inc. at Goddard Space Flight Center through funding provided by the SBIR program. The software can model the dynamics of molecules through technology that simulates low-frequency molecular motions and properties, such as movements among a molecule's constituent parts. With MBO(N)D, a molecule is substructured into a set of interconnected rigid and flexible bodies. These bodies replace individual atoms as the units of computation, reducing the burden of tracking each atom. Moldyn's technology cuts computation time while increasing accuracy. The MBO(N)D technology is available as Insight II 97.0 from Molecular Simulations, Inc. Currently the technology is used to account for forces on spacecraft parts and to perform molecular analyses for pharmaceutical purposes. It permits the solution of molecular dynamics problems on a moderate workstation, as opposed to a supercomputer.

  16. Dendrite Model

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dr. Donald Gilles, the Discipline Scientist for Materials Science in NASA's Microgravity Materials Science and Applications Department, demonstrates to Carl Dohrman a model of dendrites, the branch-like structures found in many metals and alloys. Dohrman was recently selected by the American Society for Metals International as their 1999 ASM International Foundation National Merit Scholar. The University of Illinois at Urbana-Champaign freshman recently toured NASA's materials science facilities at the Marshall Space Flight Center.

  17. Contribution of the nos-pdt operon to virulence phenotypes in methicillin-sensitive Staphylococcus aureus.

    PubMed

    Sapp, April M; Mogen, Austin B; Almand, Erin A; Rivera, Frances E; Shaw, Lindsey N; Richardson, Anthony R; Rice, Kelly C

    2014-01-01

Nitric oxide (NO) is emerging as an important regulator of bacterial stress resistance, biofilm development, and virulence. One potential source of endogenous NO production in the pathogen Staphylococcus aureus is its NO-synthase (saNOS) enzyme, encoded by the nos gene. Although a role for saNOS in oxidative stress resistance, antibiotic resistance, and virulence has recently been described, insights into the regulation of nos expression and saNOS enzyme activity remain elusive. To this end, transcriptional analysis of the nos gene in S. aureus strain UAMS-1 was performed, which revealed that nos expression increases during low-oxygen growth and is growth-phase dependent. Furthermore, nos is co-transcribed with a downstream gene, designated pdt, which encodes a prephenate dehydratase (PDT) enzyme involved in phenylalanine biosynthesis. Deletion of pdt significantly impaired the ability of UAMS-1 to grow in chemically defined media lacking phenylalanine, confirming the function of this enzyme. Bioinformatics analysis revealed that the operon organization of nos-pdt appears to be unique to the staphylococci. As described for other S. aureus nos mutants, inactivation of nos in UAMS-1 conferred sensitivity to oxidative stress, while deletion of pdt did not affect this phenotype. The nos mutant also displayed reduced virulence in a murine sepsis infection model, and increased carotenoid pigmentation when cultured on agar plates, both previously undescribed nos mutant phenotypes. Utilizing the fluorescent stain 4-amino-5-methylamino-2',7'-difluorofluorescein (DAF-FM) diacetate, decreased levels of intracellular NO/reactive nitrogen species (RNS) were detected in the nos mutant on agar plates. These results reinforce the important role of saNOS in S. aureus physiology and virulence, and have identified an in vitro growth condition under which saNOS activity appears to be upregulated. However, the significance of the operon organization of nos-pdt and potential…

  18. Contribution of the nos-pdt Operon to Virulence Phenotypes in Methicillin-Sensitive Staphylococcus aureus

    PubMed Central

    Almand, Erin A.; Rivera, Frances E.; Shaw, Lindsey N.; Richardson, Anthony R.; Rice, Kelly C.

    2014-01-01

Nitric oxide (NO) is emerging as an important regulator of bacterial stress resistance, biofilm development, and virulence. One potential source of endogenous NO production in the pathogen Staphylococcus aureus is its NO-synthase (saNOS) enzyme, encoded by the nos gene. Although a role for saNOS in oxidative stress resistance, antibiotic resistance, and virulence has recently been described, insights into the regulation of nos expression and saNOS enzyme activity remain elusive. To this end, transcriptional analysis of the nos gene in S. aureus strain UAMS-1 was performed, which revealed that nos expression increases during low-oxygen growth and is growth-phase dependent. Furthermore, nos is co-transcribed with a downstream gene, designated pdt, which encodes a prephenate dehydratase (PDT) enzyme involved in phenylalanine biosynthesis. Deletion of pdt significantly impaired the ability of UAMS-1 to grow in chemically defined media lacking phenylalanine, confirming the function of this enzyme. Bioinformatics analysis revealed that the operon organization of nos-pdt appears to be unique to the staphylococci. As described for other S. aureus nos mutants, inactivation of nos in UAMS-1 conferred sensitivity to oxidative stress, while deletion of pdt did not affect this phenotype. The nos mutant also displayed reduced virulence in a murine sepsis infection model, and increased carotenoid pigmentation when cultured on agar plates, both previously undescribed nos mutant phenotypes. Utilizing the fluorescent stain 4-amino-5-methylamino-2',7'-difluorofluorescein (DAF-FM) diacetate, decreased levels of intracellular NO/reactive nitrogen species (RNS) were detected in the nos mutant on agar plates. These results reinforce the important role of saNOS in S. aureus physiology and virulence, and have identified an in vitro growth condition under which saNOS activity appears to be upregulated. However, the significance of the operon organization of nos-pdt and potential…

  19. Model checking

    NASA Technical Reports Server (NTRS)

    Dill, David L.

    1995-01-01

Automatic formal verification methods for finite-state systems, also known as model checking, successfully reduce labor costs since they are mostly automatic. Model checkers explicitly or implicitly enumerate the reachable state space of a system whose behavior is described implicitly, perhaps by a program or a collection of finite automata. Simple properties, such as mutual exclusion or absence of deadlock, can be checked by inspecting individual states. More complex properties, such as lack of starvation, require searching for cycles with particular properties in the state graph. Specifications to be checked may consist of built-in properties, such as deadlock or 'unspecified receptions' of messages; another program or implicit description, to be compared using a simulation, bisimulation, or language-inclusion relation; or an assertion in one of several temporal logics. Finite-state verification tools are beginning to have a significant impact in commercial designs. There are many success stories of verification tools finding bugs in protocols or hardware controllers. In some cases, these tools have been incorporated into design methodology. Research in finite-state verification has been advancing rapidly and shows no signs of slowing down. Recent results include probabilistic algorithms for verification, exploitation of symmetry and independent events, and the use of symbolic representations for Boolean functions and systems of linear inequalities. One of the most exciting areas for further research is the combination of model checking with theorem-proving methods.
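The state-space enumeration this abstract describes, applied to a simple safety property such as mutual exclusion, can be sketched as a breadth-first reachability search; the toy lock protocol and function names here are illustrative, not any particular tool's input language:

```python
from collections import deque

def check_safety(init, transitions, safe):
    """Explicit-state safety check: enumerate every reachable state by BFS
    and return the first unsafe state found, or None if the property holds."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return s                       # counterexample state
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                            # property holds on all reachable states

# Toy two-process protocol: each process is 'idle' or 'critical',
# and a lock variable guards entry to the critical section.
def transitions(state):
    pcs, lock = state
    succs = []
    for i in (0, 1):
        if pcs[i] == 'idle' and lock is None:        # acquire the lock
            new = list(pcs); new[i] = 'critical'
            succs.append((tuple(new), i))
        elif pcs[i] == 'critical' and lock == i:     # release the lock
            new = list(pcs); new[i] = 'idle'
            succs.append((tuple(new), None))
    return succs

# Mutual exclusion: the two processes are never both critical.
mutex_ok = check_safety((('idle', 'idle'), None), transitions,
                        lambda s: s[0] != ('critical', 'critical'))
```

Here `mutex_ok` is `None` because the lock makes every reachable state safe; dropping the lock check from `transitions` would make the search return a counterexample instead.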

  20. Modeling biomembranes.

    SciTech Connect

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.

  1. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  2. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  3. Biomimetic modelling.

    PubMed Central

    Vincent, Julian F V

    2003-01-01

    Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more complete and certain understanding and the possibility of further revelations for application in engineering. This is a pathway as yet unformalized, and one that offers the possibility that engineers can also be scientists. PMID:14561351

  4. Modeling fatigue.

    PubMed Central

    Sumner, Walton; Xu, Jin Zhong

    2002-01-01

The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms. PMID:12463924
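The three-step mechanism this abstract describes can be sketched with a plain piecewise severity curve standing in for the two Bayesian networks; the time scale, severity model, and `describe` heuristic below are assumptions for illustration, not the simulator's actual logic:

```python
import numpy as np

# Step 1: pick points in time at which to evaluate instantaneous status
times = np.linspace(0, 30, 13)            # days over a 30-day history

# Step 2: severity at each time (0 = normal, >0 = increased); assumed course:
# fatigue builds until day 20, then resolves under a gradually effective treatment
severity = np.where(times < 20, times / 20.0,
                    np.maximum(0.0, 1.0 - (times - 20) / 5.0))

# Step 3: inspect the severity-vs-time values and report the temporal pattern
def describe(sev):
    return ("worsened, then improved"
            if sev[-1] < 0.5 * sev.max() else "progressively worse")

print(describe(severity))
```

A real implementation would replace both the severity function and the trend report with Bayesian network queries, but the sample-then-summarize structure is the same.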

  5. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper addresses important components of uncertainty in modeling water temperatures and discusses several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  6. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  7. Stochastic sampling method with MCNPX for nuclear data uncertainty propagation in criticality safety applications

    SciTech Connect

    Zhu, T.; Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.

    2012-07-01

In the domain of criticality safety, the efficient propagation of uncertainty in nuclear data to uncertainty in k{sub eff} is an important area of current research. In this paper, a method based on stochastic sampling is presented for uncertainty propagation in MCNPX calculations. To that end, the nuclear data (i.e., cross sections) are assumed to have a multivariate normal distribution, and simple random sampling is performed following this presumed probability distribution. A verification of the developed stochastic sampling procedure with MCNPX is then conducted using the {sup 239}Pu Jezebel experiment as well as the PB-2 BWR and TMI-1 PWR pin cell models from the Uncertainty Analysis in Modeling (UAM) exercises. For the Jezebel case, it is found that the developed stochastic sampling approach predicts k{sub eff} uncertainties similar to those from conventional sensitivity and uncertainty methods. For the UAM models, slightly lower uncertainties are obtained when compared with existing preliminary results. Further details of these verification studies are discussed and directions for future work are outlined. (authors)
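The simple-random-sampling scheme described in this abstract can be sketched with a toy surrogate in place of the MCNPX transport calculation; the means, covariance matrix, and the infinite-medium k_eff formula below are invented illustrative numbers, not evaluated nuclear data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed nuclear-data vector: [nu*sigma_f, sigma_c] in barns, with an
# assumed covariance describing its uncertainty (must be positive definite).
mean = np.array([1.65, 0.10])
cov = np.array([[1e-4, 2e-5],
                [2e-5, 4e-5]])

# Simple random sampling from the presumed multivariate normal distribution
samples = rng.multivariate_normal(mean, cov, size=1000)

def keff(nu_sigma_f, sigma_c):
    # Toy infinite-medium multiplication factor: production / absorption,
    # with sigma_f recovered from nu*sigma_f assuming nu = 2.5.
    return nu_sigma_f / (nu_sigma_f / 2.5 + sigma_c)

# Run the "transport calculation" (here just the surrogate) per sample,
# then summarize the propagated uncertainty.
k = keff(samples[:, 0], samples[:, 1])
print(f"k_eff = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")
```

In the actual method each sampled cross-section set drives a full MCNPX run, so the sample standard deviation of the resulting k_eff values estimates the propagated nuclear-data uncertainty.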

  8. Skylab Model

    NASA Technical Reports Server (NTRS)

    1967-01-01

This photograph shows a model of Skylab with the Command/Service Module docked. In an early effort to extend the use of Apollo for further applications, NASA established the Apollo Applications Program (AAP) in August 1965. The AAP was to include long-duration Earth orbital missions during which astronauts would carry out scientific, technological, and engineering experiments in space by utilizing modified Saturn launch vehicles and the Apollo spacecraft. Established in 1970, the Skylab Program grew out of the AAP. The goals of the Skylab were to enrich our scientific knowledge of the Earth, the Sun, the stars, and cosmic space; to study the effects of weightlessness on living organisms, including man; to study the effects of the processing and manufacturing of materials utilizing the absence of gravity; and to conduct Earth resource observations. The Skylab also conducted 19 selected experiments submitted by high school students. Skylab's 3 different 3-man crews spent up to 84 days in Earth orbit. The Marshall Space Flight Center (MSFC) had responsibility for developing and integrating most of the major components of the Skylab: the Orbital Workshop (OWS), Airlock Module (AM), Multiple Docking Adapter (MDA), Apollo Telescope Mount (ATM), Payload Shroud (PS), and most of the experiments. MSFC was also responsible for providing the Saturn IB launch vehicles for three Apollo spacecraft and crews and a Saturn V launch vehicle for the Skylab.

  9. CISNET lung models: Comparison of model assumptions and model structures

    PubMed Central

    McMahon, Pamela M.; Hazelton, William; Kimmel, Marek; Clarke, Lauren

    2012-01-01

    Sophisticated modeling techniques can be powerful tools to help us understand the effects of cancer control interventions on population trends in cancer incidence and mortality. Readers of journal articles are however rarely supplied with modeling details. Six modeling groups collaborated as part of the National Cancer Institute’s Cancer Intervention and Surveillance Modeling Network (CISNET) to investigate the contribution of US tobacco control efforts towards reducing lung cancer deaths over the period 1975 to 2000. The models included in this monograph were developed independently and use distinct, complementary approaches towards modeling the natural history of lung cancer. The models used the same data for inputs and agreed on the design of the analysis and the outcome measures. This article highlights aspects of the models that are most relevant to similarities of or differences between the results. Structured comparisons can increase the transparency of these complex models. PMID:22882887

  10. TRANSIMS environmental module

    SciTech Connect

    Williams, M.D.; Thayer, G.R.; Brown, M.J.

    1996-09-01

The purpose of the environmental module is to translate traveler behavior into consequent air quality, energy consumption, watershed nitrate deposition, and carbon dioxide emissions. The TRANSIMS environmental module is composed of a system of environmental models that can describe both the average conditions and the fluctuations about the averages. It uses a prognostic meteorological model, HOTMAC, to describe the atmospheric conditions. The environmental module will use modal emissions models to define the emissions. Transport and dispersion of conservative pollutants will be described with a Monte Carlo kernel model (RAPTAD). Air chemistry will be described by an airshed model, the current choice being the CIT model developed at the California Institute of Technology and Carnegie Mellon University.

  11. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to…

  12. I&C Modeling in SPAR Models

    SciTech Connect

    John A. Schroeder

    2012-06-01

The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating-plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], various approaches to allocating resources for detailed I&C modeling in the other SPAR models are investigated.

  13. Comparative Protein Structure Modeling Using Modeller

    PubMed Central

    Eswar, Narayanan; Marti-Renom, Marc A.; Madhusudhan, M.S.; Eramian, David; Shen, Min-yi; Pieper, Ursula

    2014-01-01

Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by an accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software are also described. PMID:18428767

  14. Forward model nonlinearity versus inverse model nonlinearity

    USGS Publications Warehouse

    Mehl, S.

    2007-01-01

    The issue of concern is the impact of forward model nonlinearity on the nonlinearity of the inverse model. The question posed is, "Does increased nonlinearity in the head solution (forward model) always result in increased nonlinearity in the inverse solution (estimation of hydraulic conductivity)?" It is shown that the two nonlinearities are separate, and it is not universally true that increased forward model nonlinearity increases inverse model nonlinearity. ?? 2007 National Ground Water Association.
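The separation of the two nonlinearities can be shown with a worked toy example: a head solution that is strongly nonlinear in hydraulic conductivity K can still yield an exactly linear estimation problem after reparameterization. The Darcy-type setup and numbers below are assumed for illustration, not taken from the paper:

```python
# Steady 1-D flow through a homogeneous column of length L with flux q gives
# a head drop h = q*L / K, which is strongly nonlinear in K (forward model).
# But in the reparameterized unknown r = 1/K, h = (q*L) * r is linear, so
# least-squares estimation of r from head data is a linear inverse problem.
q, L = 0.5, 10.0                 # assumed flux and column length
K_true = 2.0
h_obs = q * L / K_true           # noise-free synthetic observation

# Linear inverse in r: one-step least squares, no iteration needed
r_hat = h_obs / (q * L)
K_hat = 1.0 / r_hat
print(f"recovered K = {K_hat}")
```

The forward response in K is a hyperbola, yet the inverse problem in r is solved exactly in one linear step, illustrating the paper's point that forward-model nonlinearity does not automatically make the inverse problem more nonlinear.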

  15. Mexico City Air quality: Progress of an international collaborative project to define air quality management options

    NASA Astrophysics Data System (ADS)

    Streit, Gerald E.; Guzmán, Francisco

    The Mexico City Air Quality Research Initiative was a 3-yr international collaborative project to develop or adapt a set of air quality management decision analysis tools for Mexico City and make them available to Mexican policy makers. The project comprised three tasks: modeling and simulation, characterization and measurement, and strategic evaluation. A prognostic, mesoscale meteorological model was adapted to the region of Mexico City and linked to a 3-D airshed model. These were extensively tested against data from the air quality monitoring network and from three intensive field campaigns. The interaction between policy and science was promoted through the development of a formal multiattribute decision analysis model to evaluate alternative control strategies. The project benefited by having researchers from both nations working side by side as peers, by having both nations investing resources and having an interest in the success of the project, and by having an objective, not of advocacy, but of the application of science to problem solving.

  16. Thermosphere and ionosphere response on seismogenic disturbances of the global electric circuit

    NASA Astrophysics Data System (ADS)

    Karpov, Mikhail; Namgaladze, Aleksandr; Knyazeva, Maria

    2016-04-01

Conditions of warm, humid, and ionized air over active tectonic faults favor the formation of clouds and the generation of an intense vertical electric current between the Earth and the ionosphere. The latter arises due to the separation and vertical transport of oppositely charged particles by the gravity force and pressure gradients. Additional transport of charged particles into the ionosphere causes disturbances of the ionospheric plasma (under the action of the electric currents in the E-layer and electromagnetic plasma drift in the F2-layer) and of the thermospheric neutral gas (via momentum transfer from charged to neutral particles). The thermosphere and ionosphere variations formed under the action of the electric field created by this vertical electric current have been calculated using the Upper Atmosphere Model (UAM), and good agreement was found between observed and UAM-calculated perturbations of the electric field, electron and ion concentrations, total electron content (TEC), ion and electron temperatures, as well as wind velocities and neutral gas temperature and concentrations. The roles of internal gravity waves and electromagnetic plasma drift in the generation of the seismogenic TEC variations are discussed.

  17. Characterization of embedded fiber optic strain sensors into metallic structures via ultrasonic additive manufacturing

    NASA Astrophysics Data System (ADS)

    Schomer, John J.; Hehr, Adam J.; Dapino, Marcelo J.

    2016-04-01

    Fiber Bragg Grating (FBG) sensors measure deviation in a reflected wavelength of light to detect in-situ strain. These sensors are immune to electromagnetic interference, and the inclusion of multiple FBGs on the same fiber allows for a seamlessly integrated sensing network. FBGs are attractive for embedded sensing in aerospace applications due to their small noninvasive size and prospect of constant, real-time nondestructive evaluation. In this study, FBG sensors are embedded in aluminum 6061 via ultrasonic additive manufacturing (UAM), a rapid prototyping process that uses high power ultrasonic vibrations to weld similar and dissimilar metal foils together. UAM was chosen due to the desire to embed FBG sensors at low temperatures, a requirement that excludes other additive processes such as selective laser sintering or fusion deposition modeling. In this paper, the embedded FBGs are characterized in terms of birefringence losses, post embedding strain shifts, consolidation quality, and strain sensing performance. Sensors embedded into an ASTM test piece are compared against an exterior surface mounted foil strain gage at both room and elevated temperatures using cyclic tensile tests.

  18. [Evaluation of a homologous bacterin against bovine leptospirosis].

    PubMed

    Vega, Laura Elena Orozco; Flores, Rafael López; Moles y Cervantes, Luis Pedro; Valiente, Jorge Quiroz

    2005-01-01

    Forty-eight adult bovine females divided into 6 groups were used, with the aim of characterizing the immune response induced in breastfeeding cows by a homologous bacterin formulated with different adjuvants. They were intramuscularly administered 2 milliliters of a bacterin formulated with Leptospira interrogans serovars uam, wolffi, hardjo, bratislava, grippotyphosa and panama, with different adjuvants: aluminum hydroxide, Freund's complete adjuvant, Freund's incomplete adjuvant, liposoluble vitamins, and bacterin plus deparasitization with levamisole. The control group was administered bacterin only. Immunization took place on 2 occasions at an interval of 28 days. Blood samples were taken every 7 days during the first month after vaccination, and every 28 days for the next 8 months. All sera were analyzed by the microscopic agglutination test. The results were transformed into Log10 and analyzed with the NLIN and GLM procedures of SAS. The period of greatest response was estimated with the Wood prediction model. The bacterin produced no alteration in either the physiological constants or milk production. The Leptospira interrogans serovars that induced the highest titers were uam, hardjo and wolffi. The statistical difference between treatments and between serovars was determined. PMID:17966474

  19. Modeling cholera outbreaks

    PubMed Central

    Longini, Ira M.; Morris, J. Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios. PMID:23412687
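As one concrete example of the model class described, a minimal SIWR-type sketch (susceptible-infected-water-reservoir, in the spirit of Codeço-style cholera models with an environmental pathogen compartment). All parameter values are illustrative, not calibrated to Haiti or any real outbreak:

```python
# Minimal SIWR-style cholera outbreak sketch (Euler integration).
# Parameter values are illustrative, not calibrated to any outbreak.

def simulate(days=200, dt=0.1):
    N = 10_000.0                   # population
    S, I, W = N - 10, 10.0, 0.0    # susceptible, infected, water pathogen load
    beta_w = 0.5                   # ingestion rate from contaminated water
    kappa = 10_000.0               # half-saturation pathogen concentration
    gamma = 0.2                    # recovery rate (1/days)
    xi, delta = 10.0, 0.33         # shedding and pathogen decay rates
    history = []
    for _ in range(int(days / dt)):
        lam = beta_w * W / (kappa + W)   # saturating force of infection
        dS = -lam * S
        dI = lam * S - gamma * I
        dW = xi * I - delta * W
        S += dS * dt; I += dI * dt; W += dW * dt
        history.append(I)
    return history

peak = max(simulate())
print(f"peak infected: {peak:.0f}")
```

Intervention options such as vaccination or water treatment can be explored by lowering S, beta_w, or xi and re-running; calibrating such parameters to sparse surveillance data is exactly the challenge the abstract highlights.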

  20. Uncertainty Modeling Via Frequency Domain Model Validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Andrisani, Dominick, II

    1999-01-01

    The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed-loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.
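A toy version of the frequency-domain validation idea, using the same spring-mass-damper setting: check whether a candidate multiplicative uncertainty bound actually covers a set of perturbed "measured" plants at every frequency. All numbers and the specific bound test are hypothetical, not the paper's procedure:

```python
# Frequency-domain validation sketch for a spring-mass-damper
# G(s) = 1 / (m s^2 + c s + k). Checks whether a multiplicative
# uncertainty bound covers a set of perturbed "measured" systems.
# All numbers are illustrative.

def G(w, m, c, k):
    s = 1j * w
    return 1.0 / (m * s * s + c * s + k)

m0, c0, k0 = 1.0, 0.4, 1.0                     # nominal model
plants = [(1.1, 0.5, 0.9), (0.9, 0.35, 1.1)]   # perturbed systems ("data")
freqs = [0.1 * n for n in range(1, 40)]

def validated(bound):
    """True if |G_meas/G_nom - 1| <= bound at every test frequency."""
    for m, c, k in plants:
        for w in freqs:
            rel_err = abs(G(w, m, c, k) / G(w, m0, c0, k0) - 1.0)
            if rel_err > bound:
                return False
    return True

print(validated(0.1), validated(1.0))
```

A "sequential" flavor would tighten the bound dataset by dataset, keeping the smallest bound that remains consistent with all measurements seen so far.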

  1. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
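The first question, which regressors to include, can be answered classically with an information criterion. A minimal sketch on synthetic data (not from the paper), comparing an intercept-only logistic model against one that adds a regressor and keeping the model with the lower AIC:

```python
# Model selection for logistic regression via AIC -- a minimal sketch
# on synthetic data (illustrative, not from the paper).
import math

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 0, 0, 1, 0, 1, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_log_likelihood(use_x, iters=20000, lr=0.01):
    """Fit by gradient ascent; return the maximized log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = sigmoid(b0 + (b1 * xi if use_x else 0.0))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0
        if use_x:
            b1 += lr * g1
    ll = 0.0
    for xi, yi in zip(x, y):
        p = sigmoid(b0 + (b1 * xi if use_x else 0.0))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def aic(ll, k):
    return 2 * k - 2 * ll

aic_null = aic(fit_log_likelihood(False), k=1)   # intercept only
aic_full = aic(fit_log_likelihood(True), k=2)    # intercept + regressor
print(f"AIC null={aic_null:.2f}  full={aic_full:.2f}")
# The regressor is kept when the full model's AIC is lower.
```

The random-intercept question from the abstract is the analogous comparison between a fixed-intercept and a mixed model, where the marginal likelihood is harder to evaluate and Bayesian methods become attractive.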

  2. Modeling transient rootzone salinity (SWS Model)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The combined, water quality criteria for irrigation, water and ion processes in soils, and plant and soil response is sufficiently complex that adequate analysis requires computer models. Models for management are also needed but these models must consider that the input requirements must be reasona...

  3. China model: Energy modeling the modern dynasty

    SciTech Connect

    Shaw, Jason

    1996-05-01

    In this paper a node-based microeconomic analysis is used to model the Chinese energy system. This model is run across multiple periods employing Lagrangian Relaxation techniques to achieve general equilibrium. Later, carbon dioxide emissions are added and the model is run to answer the question, "How can greenhouse gas emissions be reduced?"
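The Lagrangian Relaxation step can be illustrated on a toy version of the emissions question: price carbon via a multiplier, let each sector respond independently, and adjust the price by dual ascent until the cap binds. Sectors, utilities, and all numbers here are hypothetical, not from the study:

```python
# Lagrangian relaxation sketch: allocate output across sectors to
# maximize utility subject to an emissions cap. Numbers are illustrative.

a = [4.0, 3.0, 5.0]   # marginal utility intercepts per sector
e = [2.0, 1.0, 3.0]   # emissions per unit of output
CAP = 4.0             # total emissions cap

def best_response(lmbda):
    """Each sector maximizes a*q - q^2/2 - lambda*e*q independently."""
    return [max(0.0, ai - lmbda * ei) for ai, ei in zip(a, e)]

lmbda = 0.0
for _ in range(2000):                        # dual (sub)gradient ascent
    q = best_response(lmbda)
    excess = sum(ei * qi for ei, qi in zip(e, q)) - CAP
    lmbda = max(0.0, lmbda + 0.01 * excess)  # raise price if cap exceeded

q = best_response(lmbda)
total = sum(ei * qi for ei, qi in zip(e, q))
print(f"carbon price ~ {lmbda:.3f}, emissions = {total:.3f}")
```

The decomposition is the point: relaxing the coupling constraint lets each node solve its own small problem, and only the multiplier is coordinated across periods and sectors.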

  4. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  5. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  6. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources, while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. PMID:26712513

  7. Geologic Framework Model Analysis Model Report

    SciTech Connect

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). 
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  8. Models of Magnetism.

    ERIC Educational Resources Information Center

    Borges, A. Tarciso; Gilbert, John K.

    1998-01-01

    Investigates the mental models that people construct about magnetic phenomena. Involves students, physics teachers, engineers, and practitioners. Proposes five models following a progression from simple description to a field model. Contains 28 references. (DDR)

  9. Educating with Aircraft Models

    ERIC Educational Resources Information Center

    Steele, Hobie

    1976-01-01

    Described is utilization of aircraft models, model aircraft clubs, and model aircraft magazines to promote student interest in aerospace education. The addresses for clubs and magazines are included. (SL)

  10. Forest succession models

    SciTech Connect

    Shugart, H.H. Jr.; West, D.C.

    1980-05-01

    Studies in succession attempt to determine the changes in species composition and other ecosystem attributes expected to occur over periods of time. Mathematical models developed in forestry and ecology to study ecological succession are reviewed. Tree models, gap models and forest models are discussed. Model validation or testing procedures are described. Model applications can involve evaluating large-scale and long-term changes in the ambient levels of pollutants and assessing the effects of climate change on the environment. (RJC)

  11. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

    Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- A mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) The NASA OD Evolutionary Model -- A physical model capable of predicting the future debris environment based on user-specified scenarios; (3) The NASA Standard Satellite Breakup Model -- A model describing the outcome of a satellite breakup (explosion or collision)

  12. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.

  13. Regularized Structural Equation Modeling

    PubMed Central

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019

  14. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    SciTech Connect

    Clinton Lum

    2002-02-04

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4

  15. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity

  16. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding; (2) carbon dioxide miscible flooding; (3) in-situ combustion; (4) polymer flooding; and (5) steamflood. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes. The IBM PC/AT version includes a plotting capability to produce a graphic picture of the predictive model results.

  17. Biosphere Model Report

    SciTech Connect

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  18. Biosphere Model Report

    SciTech Connect

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  19. AIDS Epidemiological models

    NASA Astrophysics Data System (ADS)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].

  20. On Multiobjective Evolution Model

    NASA Astrophysics Data System (ADS)

    Ahmed, E.; Elettreby, M. F.

    Self-Organized Criticality (SOC) phenomena could have a significant effect on the dynamics of ecosystems. The Bak-Sneppen (BS) model is a simple and robust model of biological evolution that exhibits punctuated equilibrium behavior. Here, we introduce a random version of the BS model. We also generalize the single-objective BS model to a multiobjective one.
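A minimal sketch of the single-objective BS dynamics the paper builds on: each step, the least-fit species and its two neighbors on a ring are assigned new random fitnesses, which drives the system toward a self-organized critical state. Parameters (population size, step count) are illustrative:

```python
# Bak-Sneppen evolution sketch: the least-fit species and its two
# neighbors are replaced each step, producing avalanches of change.
import random

def bak_sneppen(n=100, steps=5000, seed=1):
    random.seed(seed)
    fitness = [random.random() for _ in range(n)]
    for _ in range(steps):
        i = fitness.index(min(fitness))      # weakest species
        for j in (i - 1, i, (i + 1) % n):    # ring topology (negative index wraps)
            fitness[j] = random.random()     # mutate it and its neighbors
    return fitness

final = bak_sneppen()
# After many steps most fitness values sit above a self-organized
# threshold (~0.667 in the large-n limit of the 1-D model).
print(f"min fitness after run: {min(final):.3f}")
```

The multiobjective generalization replaces the scalar fitness comparison with a dominance criterion over several objectives; the replacement rule itself is unchanged.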

  1. Biomass Scenario Model

    SciTech Connect

    2015-09-01

    The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.

  2. Multimodeling and Model Abstraction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...

  3. Qualitative Student Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model is used as the focus of this review of qualitative student models in order to compare alternative computational models and to contrast domain requirements. The report is divided into eight sections: (1) Origins and Goals (adaptive instruction, qualitative models of processes, components of an artificial…

  4. The Instrumental Model

    ERIC Educational Resources Information Center

    Yeates, Devin Rodney

    2011-01-01

    The goal of this dissertation is to enable better predictive models by engaging raw experimental data through the Instrumental Model. The Instrumental Model captures the protocols and procedures of experimental data analysis. The approach is formalized by encoding the Instrumental Model in an XML record. Decoupling the raw experimental data from…

  5. Generative Models of Disfluency

    ERIC Educational Resources Information Center

    Miller, Timothy A.

    2010-01-01

    This thesis describes a generative model for representing disfluent phenomena in human speech. This model makes use of observed syntactic structure present in disfluent speech, and uses a right-corner transform on syntax trees to model this structure in a very natural way. Specifically, the phenomenon of speech repair is modeled by explicitly…

  6. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; in similarity with, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing. PMID:26698753
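The Fresnel ingredient of such a pBRDF can be sketched as follows: s- and p-polarized reflectances of the specular component and the resulting degree of linear polarization for a dielectric surface. The refractive index and angles are illustrative; this does not reproduce the paper's four-parameter model, only the standard Fresnel building block it rests on:

```python
# Fresnel building block of a polarimetric BRDF sketch: s- and p-
# intensity reflectances and the degree of linear polarization of the
# specular component for a dielectric with refractive index n.
import math

def fresnel_rs_rp(theta_i_deg, n=1.5):
    """Intensity reflectances for s and p polarization (theta_i > 0)."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(math.sin(ti) / n)   # Snell's law; no TIR entering denser medium
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2
    return rs, rp

def degree_of_polarization(theta_i_deg, n=1.5):
    rs, rp = fresnel_rs_rp(theta_i_deg, n)
    return (rs - rp) / (rs + rp)

# Near Brewster's angle (atan(n) ~ 56.3 deg for n = 1.5) rp -> 0,
# so the specularly reflected light is almost fully polarized.
print(degree_of_polarization(56.3))
```

In a full pBRDF these reflectances would be weighted by the microfacet slope distribution (the generalized Gaussian of the abstract) and combined with the depolarized diffuse term.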

  7. Calibrated Properties Model

    SciTech Connect

    C.F. Ahlers, H.H. Liu

    2001-12-18

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  8. Calibrated Properties Model

    SciTech Connect

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  9. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
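The derivation of an adjoint from a tangent linear model can be made concrete with the standard dot-product test: for a toy nonlinear model (hypothetical, two variables), the tangent linear is the Jacobian and the adjoint its transpose, and the inner-product identity must hold to rounding error:

```python
# Adjoint sketch: for a toy nonlinear model, build the tangent linear
# (Jacobian) and its adjoint (transpose) and verify the dot-product
# identity <M dx, dy> = <dx, M^T dy>. Model choice is illustrative.

def model(x):
    """Toy nonlinear 'parent' model: (a, b) -> (a*b, a + b^2)."""
    a, b = x
    return [a * b, a + b * b]

def tangent_linear(x, dx):
    """Jacobian of model at x applied to a perturbation dx."""
    a, b = x
    da, db = dx
    return [b * da + a * db, da + 2.0 * b * db]

def adjoint(x, dy):
    """Transpose of the Jacobian applied to dy (sensitivity back-propagation)."""
    a, b = x
    p, q = dy
    return [b * p + q, a * p + 2.0 * b * q]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [1.2, -0.7]
dx, dy = [0.3, 0.5], [-0.2, 0.9]
lhs = dot(tangent_linear(x, dx), dy)
rhs = dot(dx, adjoint(x, dy))
print(abs(lhs - rhs))   # exact up to floating-point rounding
```

This identity is exactly what automatic differentiation tools are expected to preserve, and checking it is the usual first debugging step for a hand-coded adjoint.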

  10. Stable models of superacceleration

    SciTech Connect

    Kaplinghat, Manoj; Rajaraman, Arvind

    2007-05-15

    We discuss an instability in a large class of models where dark energy is coupled to matter. In these models the mass of the scalar field is much larger than the expansion rate of the Universe. We find models in which this instability is absent, and show that these models generically predict an apparent equation of state for dark energy smaller than -1, i.e., superacceleration. These models have no acausal behavior or ghosts.

  11. Simulation of the Lower and Upper Atmosphere Interconnection

    NASA Astrophysics Data System (ADS)

    Beloushko, Konstantin

    An attempt to develop a unified numerical model of the Earth’s atmosphere is undertaken. The purpose of the project is the development of general methodologies and technologies for coupling models using the so-called frame approach [1]. The Upper Atmosphere Model (UAM) [2, 3] was taken as the basis for this metamodel. UAM is today the most advanced and promising Russian model of the upper atmosphere, qualitatively comparable to foreign analogues and surpassing them in a number of aspects (for example, in the covered range of heights, from 60 up to 100,000 km). As the model of the lower atmosphere, the general circulation model of the atmosphere and ocean of the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS, Moscow) was chosen. This model is comparable in quality to, and competes with, modern foreign forecasting models of weather and climate [4]. Coupling the UAM and INM RAS models will not only allow a unified model of the Earth's atmosphere to be developed, but will also remove the uncertainty connected with specifying the upper boundary conditions in the weather region and the lower boundary conditions in the upper atmosphere. As a result of the analysis of this problem, the following algorithm for coupling the models is offered: over the interval of heights covered by both models (60-90 km), an iterative exchange of boundary conditions is performed. Results of some joint simulations based on the UAM upper atmosphere model and the INM RAS general circulation model are presented and discussed. The author thanks A.A. Namgaladze (MSTU, Murmansk) and E.M. Volodin (INM, Moscow) for discussions and for providing model data. References: 1. Beloushko K.E. Simulation of the Coupling of the Upper and Lower Atmosphere // Russian Journal of Physical Chemistry B, 2013, Vol. 7, № 6, pp. 783-787. DOI: 10.1134/S199079311305014X. 2. Namgaladze A.A., Korenkov Yu.N., Klimenko V.V., Karpov I.V., Bessarab F.S., Surotkin V.A., Gluschenko T.A., Naumova N.M. Global model of the thermosphere
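The proposed iterative exchange of boundary conditions over the shared height interval resembles an alternating Schwarz iteration on overlapping domains. A minimal 1-D sketch, where two "models" cover overlapping segments and trade boundary values until they agree; the Laplace problem and all coordinates are purely illustrative stand-ins for the two atmosphere models:

```python
# Overlapping-domain coupling sketch (alternating Schwarz iteration):
# two "models" cover [0, 0.6] and [0.4, 1.0] and exchange boundary
# values on the overlap until they agree -- analogous to iterating
# boundary conditions over a shared height interval.

def solve_segment(x_left, u_left, x_right, u_right, x):
    """'Model' solve: steady 1-D diffusion on a segment is linear in x."""
    t = (x - x_left) / (x_right - x_left)
    return (1 - t) * u_left + t * u_right

# global boundary conditions u(0)=0, u(1)=1; exact solution is u(x)=x
u_at_06 = 0.0   # model A's right boundary value, supplied by model B
u_at_04 = 0.0   # model B's left boundary value, supplied by model A

for _ in range(50):
    # model A solves [0, 0.6] using B's value at 0.6, reports u(0.4)
    u_at_04 = solve_segment(0.0, 0.0, 0.6, u_at_06, 0.4)
    # model B solves [0.4, 1.0] using A's value at 0.4, reports u(0.6)
    u_at_06 = solve_segment(0.4, u_at_04, 1.0, 1.0, 0.6)

print(u_at_04, u_at_06)   # converges toward 0.4 and 0.6
```

The geometric convergence rate of such iterations depends on the width of the overlap, which is one reason the 60-90 km interval shared by the two models matters for the coupling algorithm.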

  12. ADAPT model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents an overview of the Agricultural Drainage and Pesticide Transport (ADAPT) model and a case study to illustrate the calibration and validation steps for predicting subsurface tile drainage and nitrate-N losses from an agricultural system. The ADAPT model is a daily time step field ...

  13. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  14. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  15. Identification and influence of spatial outliers in air quality measurements

    NASA Astrophysics Data System (ADS)

    O'Leary, B. F.; Lemke, L. D.

    2015-12-01

    The heterogeneous nature of urban air complicates the analysis of spatial and temporal variability in air quality measurements. Evaluation of potentially inaccurate measurements (i.e., outliers) poses particularly difficult challenges in extensive air quality datasets with multiple measurements distributed in time and space. This study investigated the identification and impact of outliers in measurements of NO2, BTEX, PM2.5, and PM10 in the contiguous Detroit, Michigan, USA and Windsor, Ontario, Canada international airshed. Measurements were taken at 100 locations during September 2008 and June 2009 and modeled at a 300 m by 300 m scale resolution. The objective was to determine if outliers were present and, if so, to quantify the magnitude of their impact on modeled spatial pollution distributions. The study built upon previous investigations by the Geospatial Determinants of Health Outcomes Consortium that examined relationships between air pollutant distributions and asthma exacerbations in the Detroit and Windsor airshed. Four independent approaches were initially employed to identify potential outliers: boxplots, variogram clouds, difference maps, and the Local Moran's I statistic. Potential outliers were subsequently reevaluated for consistency among methods and individually assessed to select a final set of outliers. The impact of excluding individual outliers was subsequently determined by revising the spatially variable air pollution models and recalculating associations between air contaminant concentrations and asthma exacerbations in Detroit and Windsor in 2008. For the pollutants examined, revised associations revealed weaker correlations with spatial outliers removed. Nevertheless, the approach employed improves the model integrity by increasing our understanding of the spatial variability of air pollution in the built environment and providing additional insights into the association between acute asthma exacerbations and air pollution.

  16. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  17. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  18. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  19. Modeling extragalactic bowshocks. I. The model.

    NASA Astrophysics Data System (ADS)

    Ferruit, P.; Binette, L.; Sutherland, R. S.; Pecontal, E.

    1997-06-01

    To probe the effects of the nuclear activity on the host galaxy, it is essential to disentangle the relative contribution of shock excitation from that of photoionization. One milestone towards this goal is the ability to model the bowshock structures created by the interaction of radio ejecta with their surrounding medium. We have built a bowshock model based on that of TDA (Taylor, Dyson & Axon, 1992MNRAS.255..351T), which was itself derived from earlier work on Herbig-Haro objects. Since TDA's original model supplied only [OIII]λ5007 fluxes and profiles for various models of bowshocks, we undertook to include magnetic fields and to incorporate all of the atomic data tables of the code Mappings Ic for the computation of ionization states, cooling rates and line emissivities of the gas. This new model allows us to map line ratios and profiles of extragalactic bowshocks for all major lines of astrophysical interest. In this first paper, we present our model, analyse the gas behavior along the bowshock and give some examples of model results.

  20. Modeling Hydrothermal Mineralization: Fractal or Multifractal Models?

    NASA Astrophysics Data System (ADS)

    Cheng, Q.

    2004-05-01

    Hydrothermal mineralization occurs when natural geo-processes involve the interaction of ore material-carrying hydrothermal fluids with rocks in the earth's crust in a specific geological environment. Mineralization can cause element concentration enrichment or depletion in the country rocks. Local enrichment may form an ore body that can be mined for profit under current economic and technological conditions. Understanding the spatial distribution of element concentration enrichment or depletion caused by mineralization in a mineral district is essential for mineral exploration and mineral prediction. Grade-tonnage models and mineral deposit size distribution models are commonly used for characterizing mineral deposits. This paper proposes a non-linear mineralization model, on the basis of a modified classical igneous differentiation mineralization model, to describe the generation of multifractal distributions of element concentration in the country rocks as well as the grade-tonnage fractal/multifractal distributions of ore deposits that have often been observed in hydrothermal mineralization. This work may also lead to a singularity model to explain the common properties of mineralization and mineralization-associated geochemical anomaly diversity and the generalized self-similarity of the anomalies. The model has been applied to a case study of mineral deposits prediction and mineral resource assessment in the Abitibi district, northern Ontario, Canada.
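    A standard textbook generator of multifractal concentration fields, distinct from the paper's model but illustrating the idea, is the binomial (de Wijs) multiplicative cascade: mass is repeatedly split unevenly between halves of each cell, producing strong local enrichment and depletion:

    ```python
    # Binomial (de Wijs) multiplicative cascade: a minimal generator of a
    # multifractal concentration field. Each cell splits into two children
    # receiving fractions (1 + d) and (1 - d) of its concentration.

    def de_wijs_cascade(levels, d=0.4):
        field = [1.0]  # start with unit mean concentration
        for _ in range(levels):
            nxt = []
            for c in field:
                nxt.append(c * (1 + d))   # enriched half
                nxt.append(c * (1 - d))   # depleted half
            field = nxt
        return field

    field = de_wijs_cascade(10)  # 2**10 cells, mean concentration still 1.0
    ```

    After ten levels the maximum cell is (1 + d)**10 times the mean, a toy analogue of the rare high-grade enrichments the grade-tonnage distributions describe.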

  1. Modeling volatility using state space models.

    PubMed

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years). PMID:9730016
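    The distinction the abstract stresses, dynamic noise that drives the hidden state versus observational noise added in measurement, is exactly what a state space filter separates. A minimal scalar Kalman filter sketch (illustrative parameters, not the authors' estimated models) for a hidden AR(1) log-volatility observed with noise:

    ```python
    # Scalar Kalman filter for a hidden AR(1) state observed with noise.
    # phi: AR(1) coefficient; q: dynamic-noise variance (drives the state);
    # r: observational-noise variance (does not influence future states).
    # All parameter values here are illustrative assumptions.

    def kalman_filter(obs, phi=0.95, q=0.1, r=1.0, x0=0.0, p0=1.0):
        x, p = x0, p0
        filtered = []
        for y in obs:
            # predict step: propagate state and variance through the dynamics
            x_pred = phi * x
            p_pred = phi * phi * p + q
            # update step: blend prediction with the noisy observation
            k = p_pred / (p_pred + r)        # Kalman gain
            x = x_pred + k * (y - x_pred)
            p = (1.0 - k) * p_pred
            filtered.append(x)
        return filtered
    ```

    An ordinary autoregressive fit to the raw observations would lump q and r together, which is the abstract's explanation for why AR models underestimate relaxation times.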

  2. Energy-consumption modelling

    SciTech Connect

    Reiter, E.R.

    1980-01-01

    A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.

  3. Holographic twin Higgs model.

    PubMed

    Geller, Michael; Telem, Ofri

    2015-05-15

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider. PMID:26024160

  4. Stochastic modeling of rainfall

    SciTech Connect

    Guttorp, P.

    1996-12-31

    We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.

  5. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
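    The aggregation idea, combining low-level component reliability models according to a system architecture description, can be sketched with the classical series/parallel rules. The component values and the tuple-based architecture description below are illustrative assumptions, not the patented generator's format:

    ```python
    # Toy sketch: aggregate low-level reliability models (here, success
    # probabilities) into a system-level reliability via an architecture
    # description given as nested ("series" | "parallel" | "comp") tuples.

    LOW_LEVEL = {"cpu": 0.99, "bus": 0.995, "sensor": 0.90}

    def system_reliability(arch):
        kind = arch[0]
        if kind == "comp":
            return LOW_LEVEL[arch[1]]
        parts = [system_reliability(a) for a in arch[1:]]
        if kind == "series":      # all parts must work
            r = 1.0
            for p in parts:
                r *= p
            return r
        if kind == "parallel":    # redundancy: any one part suffices
            f = 1.0
            for p in parts:
                f *= (1.0 - p)
            return 1.0 - f
        raise ValueError("unknown architecture node: %r" % (kind,))

    # cpu and bus in series with a redundant sensor pair
    arch = ("series", ("comp", "cpu"), ("comp", "bus"),
            ("parallel", ("comp", "sensor"), ("comp", "sensor")))
    ```

    Duplicating the sensor lifts its subsystem reliability from 0.90 to 0.99, which is the kind of architecture-level effect such a generator makes easy to evaluate.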

  6. Gear mesh compliance modeling

    NASA Technical Reports Server (NTRS)

    Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.

    1986-01-01

    A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.

  7. Gear mesh compliance modeling

    NASA Technical Reports Server (NTRS)

    Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.

    1987-01-01

    A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.

  8. Reduced Vector Preisach Model

    NASA Technical Reports Server (NTRS)

    Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)

    2002-01-01

    A new vector Preisach model, called the Reduced Vector Preisach model (RVPM), was developed for fast computations. This model, derived from the Simplified Vector Preisach model (SVPM), has individual components that, like the SVPM's, are calculated independently using coupled selection rules for the state vector computation. However, the RVPM does not require the rotational correction. Therefore, it provides a practical alternative for computing the magnetic susceptibility using a differential approach. A vector version, using the framework of the DOK model, is implemented. Simulation results for the reduced vector Preisach model are also presented.

  9. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  10. A future of the model organism model

    PubMed Central

    Rine, Jasper

    2014-01-01

    Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. However, researchers must take special care and implement new resources to enable the newest members of the community to engage fully with the remarkable legacy of information in these fields. PMID:24577733

  11. 75 FR 39804 - Airworthiness Directives; The Boeing Company Model 757 Airplanes, Model 767 Airplanes, and Model...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... Model 757 Airplanes, Model 767 Airplanes, and Model 777-200 and -300 Series Airplanes AGENCY: Federal... directive (AD) for certain Model 757 airplanes, Model 767 airplanes, and Model 777-200 and -300 series...) that would apply to certain Model 757 airplanes, Model 767 airplanes, and Model 777-200 and -300...

  12. Develop a Model Component

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library component; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose. The component I was assigned, specifically, was a

  13. Biosphere Model Report

    SciTech Connect

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  14. Does air pollution pose a public health problem for New Zealand?

    PubMed

    Scoggins, Amanda

    2004-02-01

    Air pollution is increasingly documented as a threat to public health and a major focus of regulatory activity in developed and developing countries. Air quality indicators suggest New Zealand has clean air relative to many other countries. However, media releases such as 'Christchurch wood fires pump out deadly smog' and 'Vehicle pollution major killer' have sparked public health concern regarding exposure to ambient air pollution, especially in anticipation of increasing emissions and population growth. Recent evidence is presented on the effects of air quality on health, which has been aided by the application of urban airshed models and Geographic Information Systems (GIS). Future directions for research into the effects of air quality on health in New Zealand are discussed, including a national ambient air quality management project: HAPINZ--Health and Air Pollution in New Zealand. PMID:15108741

  15. [MODELING INFLAMMATORY EDEMA: ARE THE MODELS INTERCHANGEABLE].

    PubMed

    Hanh, Cong Hong; Khaziakhmetova, V N; Ziganshina, L E

    2015-01-01

    Experimental modeling of inflammatory edema by sub-plantar injection of carrageenan and formalin in mice and rats is widely used to evaluate potential anti-inflammatory activity of new drugs. This systematic analysis of published data showed that carrageenan induced paw edema model is used for evaluating the anti-inflammatory activity mostly in rats rather than mice. Formalin induced paw edema in rats and mice is used primarily for evaluation of the analgesic activity of drugs. Taken together, the results of this systematic review of available literature on edema modeling substantiate recommendation to use carrageenan paw edema in rats and formalin paw edema in mice as complementary, but not interchangeable models of inflammation. PMID:26591204

  16. Model documentation: Commercial Sector Energy Model [CSEM]

    SciTech Connect

    Not Available

    1984-08-10

    The CSEM forecasts floorspace area and demand for natural gas, electricity and fuel oil for six building categories in four Census regions. Real disposable personal income, population and real fuel prices are the major exogenous drivers of these forecasts. The commercial model uses the coefficients from the three econometric submodules to calculate building floorspace, end-use fuel choices, and utilization (energy use per square foot) for the three major fuels. Separately from these structural components the model also calculates energy use for the minor fuels: liquefied petroleum gas, kerosene, coal, and motor gasoline. Through the use of accounting equations, the commercial model combines the structural components to get total commercial energy use over the major fuels. It then adds in the minor fuels, passes the information back to the other models and writes reports.

  17. Aerosol Modeling for the Global Model Initiative

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.

    2001-01-01

    The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

  18. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
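    The assembly idea can be sketched in one dimension: build first-order Taylor (linear) models at a few sampling states, then blend them with Gaussian radial basis function weights. The target function, sample points, and kernel width below are illustrative, not the paper's aeroelastic system:

    ```python
    # Assemble piecewise linear local models with RBF weights.
    # Illustrative sketch: the target function, centers, and width are
    # assumptions for demonstration only.
    import math

    def local_linear(x0, f, h=1e-5):
        """First-order Taylor model of f about x0 (value + central-difference slope)."""
        f0 = f(x0)
        df = (f(x0 + h) - f(x0 - h)) / (2 * h)
        return lambda x: f0 + df * (x - x0)

    def assemble(f, centers, width=0.5):
        models = [local_linear(c, f) for c in centers]

        def blended(x):
            # Gaussian RBF weights, normalized so they sum to one
            w = [math.exp(-((x - c) / width) ** 2) for c in centers]
            total = sum(w)
            return sum(wi * m(x) for wi, m in zip(w, models)) / total

        return blended

    f = math.sin
    model = assemble(f, centers=[0.0, 1.0, 2.0, 3.0])
    ```

    Near a sampling state the nearest local model dominates, so the blend reproduces the local linearization; between states the weights interpolate smoothly among neighboring linear models.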

  19. Aggregation in ecosystem models and model stability

    NASA Astrophysics Data System (ADS)

    Giricheva, Evgeniya

    2015-05-01

    A multimodel approach to ecosystem research makes better use of the available information about the system under study. This study presents several models of the Bering Sea ecosystem. The ecosystem is first considered as a closed object, that is, the influence of the environment is not represented. We then add links to the external environment in the models. The models differ in the degree and method of grouping components. Our method is based on differences in the habitat and food sources of groups, which allows us to determine the grouping of species with a greater effect on system dynamics. In particular, we determine whether benthic fish aggregation or pelagic fish aggregation can change the consumption structure of some groups of species, and consequently, the behavior of the entire model system.

  20. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons, making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs that have been previously waterflooded to residual oil saturation; thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide Miscible Flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-Situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  1. X-33 RCS model

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Model support system and instrumentation cabling of the 1% scale X-33 reaction control system model, installed in the Unitary Plan Wind Tunnel for supersonic testing in Building 1251, test section #2.

  2. Mass modeling for bars

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.

    1987-01-01

    Methods of modeling mass for bars are surveyed. A method for extending John Archer's concept of consistent mass beyond just translational inertia effects is included. Recommendations are given for various types of modeling situations.

  3. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
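
The "bounding" tested in this abstract can be sketched as a simple clamp of each environmental predictor to its training-data range before the fitted model is evaluated; the predictor names and values below are hypothetical, and no CART or Maxent model is involved.

```python
import numpy as np

# Training observations: columns are hypothetical predictors, e.g.
# temperature and precipitation.
train = np.array([[12.0, 200.0],
                  [18.0, 450.0],
                  [25.0, 900.0]])
lo, hi = train.min(axis=0), train.max(axis=0)  # environmental bounds of the data

def clamp(x):
    """Bound new predictor values to the range seen during model development."""
    return np.clip(x, lo, hi)

new_site = np.array([31.0, 150.0])  # lies outside the training envelope
bounded = clamp(new_site)           # temperature -> 25.0, precipitation -> 200.0
```

Evaluating the fitted model on `bounded` rather than `new_site` prevents the GIS extrapolation from responding to predictor values the model never saw.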

  4. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.

  5. SEDIMENT GEOCHEMICAL MODEL

    EPA Science Inventory

    Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...

  6. Of Molecules and Models.

    ERIC Educational Resources Information Center

    Brinner, Bonnie

    1992-01-01

    Presents an activity in which models help students visualize both the DNA process and transcription. After constructing DNA, RNA messenger, and RNA transfer molecules; students model cells, protein synthesis, codons, and RNA movement. (MDH)

  7. Modeling Infectious Diseases

    MedlinePlus

    ... MIDAS models require a breadth of knowledge, the network draws together an interdisciplinary team of researchers with expertise in epidemiology, infectious diseases, computational biology, statistics, social sciences, physics, computer sciences and informatics. In 2006, MIDAS modelers simulated ...

  8. Modeling DNA Replication.

    ERIC Educational Resources Information Center

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  9. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.

  10. TMDL RUSLE MODEL

    EPA Science Inventory

    We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...
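
Spreadsheet sediment tools of this kind build on the RUSLE relation A = R × K × LS × C × P. A minimal sketch follows; the factor values are illustrative placeholders, not taken from the EPA model.

```python
def rusle(R, K, LS, C, P):
    """Average annual soil loss A = R*K*LS*C*P, where R is rainfall
    erosivity, K soil erodibility, LS slope length-steepness,
    C cover management, and P support practice."""
    return R * K * LS * C * P

# Illustrative factor values for a single hypothetical land segment.
A = rusle(R=170.0, K=0.28, LS=1.2, C=0.12, P=1.0)  # tons/acre/year
print(round(A, 2))
```

Because the equation is a plain product, per-segment loss estimates and watershed rollups are easy to reproduce in a spreadsheet, which is what makes this simplified screening approach practical.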

  11. Modeling EERE deployment programs

    SciTech Connect

    Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.

    2007-11-01

    The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.

  12. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed to identify errors in requirements at an early stage of the development process. Verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  13. System Advisor Model

    2010-03-01

    The System Advisor Model (SAM) is a performance and economic model designed to facilitate decision making for people involved in the renewable energy industry, ranging from project managers and engineers to incentive program designers, technology developers, and researchers.

  14. Comparison of Decision Models

    NASA Technical Reports Server (NTRS)

    Feinberg, A.; Miles, J. R. F.; Smith, J. H.; Scheuer, E. M.

    1986-01-01

    Two methods of multiattribute decision analysis compared in report. One method employs linear utility model. Other utilizes multiplicative utility model. Report based on interviews with experts in automotive technology to obtain their preferences regarding 10 new types of vehicles.

  15. Protein solubility modeling

    NASA Technical Reports Server (NTRS)

    Agena, S. M.; Pusey, M. L.; Bogle, I. D.

    1999-01-01

    A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

  16. METEOROLOGICAL AND TRANSPORT MODELING

    EPA Science Inventory

    Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...

  17. Soil moisture modeling review

    NASA Technical Reports Server (NTRS)

    Hildreth, W. W.

    1978-01-01

    A determination of the state of the art in soil moisture transport modeling based on physical or physiological principles was made. It was found that soil moisture models based on physical principles have been under development for more than 10 years. However, these models were shown to represent infiltration and redistribution of soil moisture quite well. Evapotranspiration has not been as adequately incorporated into the models.

  18. Future of groundwater modeling

    USGS Publications Warehouse

    Langevin, Christian D.; Panday, Sorab

    2012-01-01

    With an increasing need to better manage water resources, the future of groundwater modeling is bright and exciting. However, while the past can be described and the present is known, the future of groundwater modeling, just like a groundwater model result, is highly uncertain and any prediction is probably not going to be entirely representative. Thus we acknowledge this as we present our vision of where groundwater modeling may be headed.

  19. Modeling of spacecraft charging

    NASA Technical Reports Server (NTRS)

    Whipple, E. C., Jr.

    1977-01-01

    Three types of modeling of spacecraft charging are discussed: statistical models, parametric models, and physical models. Local time dependence of circuit upset for DoD and communication satellites, and electron current to a sphere with an assumed Debye potential distribution are presented. Four regions were involved in spacecraft charging: (1) undisturbed plasma, (2) plasma sheath region, (3) spacecraft surface, and (4) spacecraft equivalent circuit.

  20. Hierarchical Bass model

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru

    2014-03-01

    We propose a new model about diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and so the better agreement is obtained than the Bass model.
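
For context, the classical Bass model that this abstract extends describes cumulative adopters N through dN/dt = (p + qN/M)(M − N), with innovation coefficient p, imitation coefficient q, and market size M. A discrete-time sketch follows; the parameter values are illustrative, not fitted to the iPod data.

```python
def bass_path(p, q, M, steps):
    """Euler time-stepping of the Bass diffusion dynamics
    dN/dt = (p + q*N/M) * (M - N), returning cumulative adopters per step."""
    N, path = 0.0, []
    for _ in range(steps):
        N += (p + q * N / M) * (M - N)  # one step of the Bass dynamics
        path.append(N)
    return path

# Illustrative parameters: adoption follows the familiar S-shaped curve.
sales = bass_path(p=0.03, q=0.38, M=100.0, steps=30)
```

The memory effect proposed in the abstract replaces the instantaneous imitation term qN/M with one that depends on how many adopters or advertisements a non-adopter has met, which the plain recursion above lacks.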

  1. Modeling the Pacific Ocean

    SciTech Connect

    Johnson, M.A.; O'Brien, J.J.

    1990-01-01

    Two numerical models utilizing primitive equations (two momentum equations and a mass continuity equation) simulate the oceanography of the Pacific Ocean from 20°S to 50°N. The authors examine the abundant model data through visualization, by animating the appropriate model fields and viewing the time history of each model simulation as a color movie. The animations are used to aid understanding of ocean circulation.

  2. Avionics Architecture Modelling Language

    NASA Astrophysics Data System (ADS)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  3. Current sheet model

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The model of a rectenna based on the current sheet equivalency of a large planar array is described. The model is mathematically characterized by an expression for the fraction of the incident plane wave that is reflected from the sheet. The model is conceptually justified for normal incidence by comparing it to the waveguide model, in which evanescent modes, present at and beyond cutoff, correspond to the near-field components that become negligible at any significant distance from the antenna array.

  4. Modeling Complex Calorimeters

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali

    2004-01-01

    We have developed a software suite that models complex calorimeters in the time and frequency domain. These models can reproduce all measurements that we currently do in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.

  5. Mathematical circulatory system model

    NASA Technical Reports Server (NTRS)

    Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)

    2010-01-01

    A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
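
Representing a regulatory mechanism parameter as a logistic function, as the first embodiment describes, gives a response that saturates smoothly between a floor and a ceiling. The sketch below is a generic logistic parameterization; the bounds, midpoint, and steepness values are hypothetical, not taken from the patent.

```python
import math

def regulatory_response(pressure, r_min=0.2, r_max=1.8, midpoint=90.0, k=0.1):
    """Logistic regulatory parameter: bounded in (r_min, r_max),
    equal to the midpoint value of the range when pressure == midpoint,
    with steepness controlled by k."""
    return r_min + (r_max - r_min) / (1.0 + math.exp(-k * (pressure - midpoint)))

print(round(regulatory_response(90.0), 2))  # at the midpoint the response is 1.0
```

The bounded, saturating shape is what makes a logistic form attractive for lumped-parameter regulation: the mechanism's effect can neither vanish entirely nor grow without limit as the driving pressure varies.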

  6. Campus Energy Modeling Platform

    2014-09-19

    NREL's Campus Energy Modeling project provides a suite of simulation tools for integrated, data driven energy modeling of commercial buildings and campuses using Simulink. The tools enable development of fully interconnected models for commercial campus energy infrastructure, including electrical distribution systems, district heating and cooling, onsite generation (both conventional and renewable), building loads, energy storage, and control systems.

  7. Models, Norms and Sharing.

    ERIC Educational Resources Information Center

    Harris, Mary B.

    To investigate the effect of modeling on altruism, 156 third and fifth grade children were exposed to a model who either shared with them, gave to a charity, or refused to share. The test apparatus, identified as a game, consisted of a box with signal lights and a chute through which marbles were dispensed. Subjects and the model played the game…

  8. Biophysical and spectral modeling

    NASA Technical Reports Server (NTRS)

    Goel, N. S. (Principal Investigator)

    1982-01-01

    Activities and results of a project to develop strategies for modeling vegetative canopy reflectance are reported. Specific tasks included the inversion of canopy reflectance models to estimate agronomic variables (particularly leaf area index) from in-situ reflectance measurements, and a study of possible uses of ecological models in analyzing temporal profiles of greenness.

  9. A Model Chemistry Class.

    ERIC Educational Resources Information Center

    Summerlin, Lee; Borgford, Christie

    1989-01-01

    Described is an activity which uses a 96-well reaction plate and soda straws to construct a model of the periodic table of the elements. The model illustrates the ionization energies of the various elements. Construction of the model and related concepts are discussed. (CW)

  10. What Is a Model?

    ERIC Educational Resources Information Center

    McNamara, James F.

    1996-01-01

    Uses R.A. Ackoff's connotations to define "model" as noun, adjective, and verb. Researchers should use various types of models (iconic, analogue, or symbolic) for three purposes: to reveal reality, to explain the past and present, and to predict and control the future. Herbert Simon's process model for administrative decision making has widespread…

  11. Crushed Salt Constitutive Model

    SciTech Connect

    Callahan, G.D.

    1999-02-01

    The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.

  12. Modeling rapidly rotating stars

    NASA Astrophysics Data System (ADS)

    Rieutord, M.

    2006-06-01

    We review the quest of modeling rapidly rotating stars during the past 40 years and detail the challenges to be taken up by models facing new data from interferometry, seismology, spectroscopy... We then present the progress of the ESTER project aimed at giving a physically self-consistent model for the structure and evolution of rapidly rotating stars.

  13. Retrofitted supersymmetric models

    NASA Astrophysics Data System (ADS)

    Bose, Manatosh

    This thesis explores several models of metastable dynamic supersymmetry breaking (MDSB) and a supersymmetric model of hybrid inflation. All of these models possess discrete R-symmetries. We focus especially on retrofitted models of supersymmetry breaking. First, we construct retrofitted models of gravity mediation. In these models we explore the genericity of the so-called "split supersymmetry." We show that with the simplest models, where the goldstino multiplet is neutral under the discrete R-symmetry, a split spectrum is not generic. However, if the goldstino superfield is charged under some symmetry other than the R-symmetry, then a split spectrum is achievable but not generic. We also present a gravity mediated model where the fine tuning of the Z-boson mass is dictated by a discrete choice rather than a continuous tuning. We then construct retrofitted models of gauge mediated SUSY breaking. We show that, in these models, if the approximate R-symmetry of the theory is spontaneously broken, the messenger scale is fixed; if it is explicitly broken by retrofitted couplings, a very small dimensionless number is required; and if supergravity corrections are responsible for the symmetry breaking, at least two moderately small couplings are required, and there is a large range of possible messenger scales. Finally, we turn our attention to small-field hybrid inflation. We construct a model that yields a spectral index ns = 0.96. We also briefly discuss the possibility of relating the scale of inflation to the dynamics responsible for supersymmetry breaking.

  14. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  15. A Model Performance

    ERIC Educational Resources Information Center

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  16. Impact-GMI Model

    2007-03-22

    IMPACT-GMI is an atmospheric chemical transport model designed to run on massively parallel computers. It simulates trace pollutants in the atmosphere, including their emission, chemistry, and deposition, and can be used to assess air quality and its impact on future climate change.

  17. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  18. SECOND GENERATION MODEL

    EPA Science Inventory

    One of the environmental and economic models that the U.S. EPA uses to assess climate change policies is the Second Generation Model (SGM). SGM is a 13 region, 24 sector computable general equilibrium (CGE) model of the world that can be used to estimate the domestic and intern...

  19. Progress in mix modeling

    SciTech Connect

    Harrison, A.K.

    1997-03-14

    We have identified the Cranfill multifluid turbulence model (Cranfill, 1992) as a starting point for development of subgrid models of instability, turbulent and mixing processes. We have differenced the closed system of equations in conservation form, and coded them in the object-oriented hydrodynamics code FLAG, which is to be used as a testbed for such models.

  20. Modeling EERE Deployment Programs

    SciTech Connect

    Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.

    2007-11-01

    This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.

  1. Modern Media Education Models

    ERIC Educational Resources Information Center

    Fedorov, Alexander

    2011-01-01

    The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…

  2. Rock Properties Model

    SciTech Connect

    C. Lum

    2004-09-16

    The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density as direct input to the ''Saturated Zone Flow and Transport Model Abstraction'', MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Section 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the ''Technical Work Plan for: The Integrated Site Model, Revision 05'' (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.

  3. Modeling Climate Dynamically

    ERIC Educational Resources Information Center

    Walsh, Jim; McGehee, Richard

    2013-01-01

    A dynamical systems approach to energy balance models of climate is presented, focusing on low order, or conceptual, models. Included are global average and latitude-dependent, surface temperature models. The development and analysis of the differential equations and corresponding bifurcation diagrams provides a host of appropriate material for…

  4. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  5. Model Rockets and Microchips.

    ERIC Educational Resources Information Center

    Fitzsimmons, Charles P.

    1986-01-01

    Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)

  6. New Directions for Modeling?

    ERIC Educational Resources Information Center

    Mason, Thomas R.

    1976-01-01

    Noting the disappointing results of past experimentation with computer modeling technology in higher education, the author discusses developments which promise potential: communication between model builders and users, interaction between large- and small-scale models, interface with operating data systems, emphasis on outcomes, and continued…

  7. General Graded Response Model.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    This paper describes the graded response model. The graded response model represents a family of mathematical models that deal with ordered polytomous categories, such as: (1) letter grading; (2) an attitude survey with "strongly disagree, disagree, agree, and strongly agree" choices; (3) partial credit given in accord with an individual's degree…

  8. Models for Products

    ERIC Educational Resources Information Center

    Speiser, Bob; Walter, Chuck

    2011-01-01

    This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…

  9. REGULATORY AIR QUALITY MODELS

    EPA Science Inventory

    Appendix W to 40CFR Part 51 (Guideline on Air Quality Models) specifies the models to be used for purposes of permitting, PSD, and SIPs. Through a formal regulatory process this modeling guidance is periodically updated to reflect current science. In the most recent action, thr...

  10. Molecular Models in Biology

    ERIC Educational Resources Information Center

    Goodman, Richard E.

    1970-01-01

    Describes types of molecular models (ball-and-stick, framework, and space-filling) and evaluates commercially available kits. Gives instructions for constructing models from polystyrene balls and pipe-cleaners. Models are useful for class demonstrations although not sufficiently accurate for research use. Illustrations show biologically important…

  11. Modeling and Remodeling Writing

    ERIC Educational Resources Information Center

    Hayes, John R.

    2012-01-01

    In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…

  12. IR DIAL performance modeling

    SciTech Connect

    Sharlemann, E.T.

    1994-07-01

    We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

  13. Global Timber Model (GTM)

    EPA Science Inventory

    GTM is an economic model capable of examining global forestry land-use, management, and trade responses to policies. In responding to a policy, the model captures afforestation, forest management, and avoided deforestation behavior. The model estimates harvests in industrial fore...

  14. Hierarchical Models of Attitude.

    ERIC Educational Resources Information Center

    Reddy, Srinivas K.; LaBarbera, Priscilla A.

    1985-01-01

    The application and use of hierarchical models is illustrated, using the example of the structure of attitudes toward a new product and a print advertisement. Subjects were college students who responded to seven-point bipolar scales. Hierarchical models were better than nonhierarchical models in conceptualizing attitude but not intention. (GDC)

  15. Generalized Latent Trait Models.

    ERIC Educational Resources Information Center

    Moustaki, Irini; Knott, Martin

    2000-01-01

    Discusses a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. Presents a unified maximum likelihood method for estimating the parameters of the generalized latent trait model and discusses the scoring of individuals on the latent dimensions.…

  16. Modelling Vocabulary Loss

    ERIC Educational Resources Information Center

    Meara, Paul

    2004-01-01

    This paper describes some simple simulation models of vocabulary attrition. The attrition process is modelled using a random autonomous Boolean network model, and some parallels with real attrition data are drawn. The paper argues that applying a complex systems approach to attrition can provide some important insights, which suggest that real…
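
    The random autonomous Boolean network idea can be sketched as follows; the network size, connectivity, and random truth tables below are arbitrary toy choices (not Meara's actual parameters), with each word treated as a node whose known/lost state is updated by a fixed random Boolean function of k other nodes:

```python
import random

def simulate_attrition(n_words=200, k=2, steps=50, seed=1):
    """Toy random autonomous Boolean network: each word is on (known)
    or off (lost); its next state is a fixed random Boolean function
    of the states of k randomly chosen words."""
    rng = random.Random(seed)
    # Fixed wiring: each word reads k other words
    inputs = [[rng.randrange(n_words) for _ in range(k)] for _ in range(n_words)]
    # One fixed random truth table (2**k entries) per word
    tables = [[rng.random() < 0.5 for _ in range(2 ** k)] for _ in range(n_words)]
    state = [True] * n_words  # start with the full vocabulary intact
    history = []
    for _ in range(steps):
        history.append(sum(state))  # number of words still "known"
        idx = [sum(state[j] << p for p, j in enumerate(ins)) for ins in inputs]
        state = [tables[w][idx[w]] for w in range(n_words)]
    return history

trajectory = simulate_attrition()
```

    Even this crude sketch shows the qualitative point of the paper: attrition in such a system is not a simple monotone decay but an emergent property of the interaction structure.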

  17. Modelling MIZ dynamics in a global model

    NASA Astrophysics Data System (ADS)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with greater fidelity. Current models are not fully fit for purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include a sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from the global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin-scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However, with the current increase in summer MIZ area, which is projected to continue and may make the MIZ the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting both sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  18. Advances in Watershed Models and Modeling

    NASA Astrophysics Data System (ADS)

    Yeh, G. T.; Zhang, F.

    2015-12-01

    The development of watershed models and their applications to real-world problems has evolved significantly since the 1960s. Watershed models can be classified based on what media are included, what processes are dealt with, and what approaches are taken. In terms of media, a watershed may include a segregated overland regime, river-canal-open channel networks, ponds-reservoirs-small lakes, and subsurface media. It may also include an integrated combination of all or some of these media, as well as man-made control structures. In terms of processes, a watershed model may deal with coupled or decoupled hydrological and biogeochemical cycles. These processes include fluid flow, thermal transport, salinity transport, sediment transport, reactive transport, and biota and microbe kinetics. In terms of approaches, either a parametric or a physics-based approach can be taken. This talk discusses the evolution of watershed models over the past sixty years. The advances in watershed models center on their increasing capability to represent these segregated or integrated media and coupled or decoupled processes. Widely used models developed by academia, research institutes, government agencies, and private industry will be reviewed in terms of the media and processes included as well as the approaches taken. Many types of potential benchmark problems can be proposed and will be discussed. This presentation will focus on three benchmark problems of biogeochemical cycles. These three problems, dealing with water quality transport, will be formulated in terms of reactive transport. Simulation results will be illustrated using WASH123D, a watershed model developed and continuously updated by the author and his PhD graduates. Keywords: Hydrological Cycles, Biogeochemical Cycles, Biota Kinetics, Parametric Approach, Physics-based Approach, Reactive Transport.

  19. Modeling agriculture in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Drewniak, B.; Song, J.; Prell, J.; Kotamarthi, V. R.; Jacob, R.

    2013-04-01

    The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types - maize, soybean, and spring wheat - into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements for soybean, but not as well for maize. CLM-Crop yields were comparable with observations in countries such as the United States, Argentina, and China, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation, in agreement with other modeling studies. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. 
Our improvements in CLM demonstrate a new capability in the model: simulating agriculture in a realistic way, complete with fertilizer and residue management.

  20. Technical Report -- Essentials of which will be published as a journal paper

    SciTech Connect

    Harindra J. S. Fernando; James Anderson; Don Boyer

    2005-10-25

    Vertical Transport and Mixing in Complex Terrain Airsheds: Implementation of a Stable PBL Turbulence Parameterization for the Mesoscale Model MM5. The difficulties associated with parameterization of turbulence in the stable nocturnal atmospheric boundary layer have been a great challenge for the night-time predictions of mesoscale meteorological models such as MM5. As such, there is a general consensus on the need for better stable boundary-layer parameterizations. To this end, two new turbulence parameterizations based on the measurements of the Vertical Transport and Mixing (VTMX) field campaign were implemented and evaluated in MM5. A unique aspect of this parameterization is the use of a stability-dependent turbulent Prandtl number that allows momentum to be transported by internal waves, while heat diffusion is impeded by the stratification. This improvement alleviates the problem of over-prediction of heat diffusion under stable conditions, which is a characteristic of conventional PBL schemes, such as the MRF and Blackadar schemes employed in MM5. The predictions made with the new PBL scheme for the complex terrain airshed of Salt Lake City were compared with those made with a default scheme of MM5 and with observations made during the VTMX campaign. The new schemes showed an improvement in predictions, particularly for the nocturnal near-surface temperature. Surface wind predictions also improved slightly, but not to the extent of temperature predictions. The default MRF scheme showed a significantly warmer surface temperature than observed, which could be attributed to the enhanced vertical heat exchange brought about by its turbulence parameterization. The modified parameterizations reduced the surface sensible heat flux, thus enhancing the strength of the near-surface inversion and lowering the temperature toward the observed values.

  1. Pediatric Computational Models

    NASA Astrophysics Data System (ADS)

    Soni, Bharat K.; Kim, Jong-Eun; Ito, Yasushi; Wagner, Christina D.; Yang, King-Hay

    A computational model is a computer program that attempts to simulate a behavior of a complex system by solving mathematical equations associated with principles and laws of physics. Computational models can be used to predict the body's response to injury-producing conditions that cannot be simulated experimentally or measured in surrogate/animal experiments. Computational modeling also provides means by which valid experimental animal and cadaveric data can be extrapolated to a living person. Widely used computational models for injury biomechanics include multibody dynamics and finite element (FE) models. Both multibody and FE methods have been used extensively to study adult impact biomechanics in the past couple of decades.

  2. UZ Colloid Transport Model

    SciTech Connect

    M. McGraw

    2000-04-13

    The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); and (3) provide a basis for development of an abstracted model for use in PA calculations.

  3. Pilot model hypothesis testing

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.; Berry, P. W.

    1982-01-01

    The aircraft control time history predicted by the optimal control pilot model and actual pilot tracking data obtained from NASA Langley's differential maneuvering simulator (DMS) are analyzed. The analysis is performed using a hypothesis testing scheme modified to allow for changes in the true hypothesis. A finite number of pilot models, each with a different hypothesized internal model representation of the aircraft dynamics, are constructed. The hypothesis testing scheme determines the relative probability that each pilot model best matches the DMS data. By observing the changes in probabilities, it is possible to determine when the pilot changes control strategy and which hypothesized pilot model best represents the pilot's control behavior.
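
    One step of such a multiple-model hypothesis test can be sketched as a Bayesian reweighting of model probabilities by the likelihood of each model's prediction residual. The Gaussian residual assumption and function name below are illustrative, not taken from the paper:

```python
import math

def update_model_probs(probs, residuals, sigmas):
    """One step of multiple-model hypothesis testing: reweight each
    candidate model's probability by the Gaussian likelihood of its
    prediction residual, then renormalize so the weights sum to one."""
    likes = [math.exp(-0.5 * (r / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
             for r, s in zip(residuals, sigmas)]
    post = [p * l for p, l in zip(probs, likes)]
    total = sum(post)
    return [p / total for p in post]

# Two candidate pilot models; model 0 predicts the tracking data far better
posterior = update_model_probs([0.5, 0.5], residuals=[0.1, 2.0], sigmas=[1.0, 1.0])
```

    Repeating this update over the time history makes the probability mass migrate toward whichever hypothesized internal model currently explains the data, which is how changes in control strategy show up.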

  4. Models of Goldstone gauginos

    NASA Astrophysics Data System (ADS)

    Alves, Daniele S. M.; Galloway, Jamison; McCullough, Matthew; Weiner, Neal

    2016-04-01

    Models with Dirac gauginos are appealing scenarios for physics beyond the Standard Model. They have smaller radiative corrections to scalar soft masses, a suppression of certain supersymmetry (SUSY) production processes at the LHC, and ameliorated flavor constraints. Unfortunately, they are generically plagued by tachyons charged under the Standard Model, and attempts to eliminate such states typically spoil the positive features. The recently proposed "Goldstone gaugino" mechanism provides a simple realization of Dirac gauginos that is automatically free of dangerous tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply and discuss scenarios for unification.

  5. The FREZCHEM Model

    NASA Astrophysics Data System (ADS)

    Marion, Giles M.; Kargel, Jeffrey S.

    Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.

  6. Surrogate waveform models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built out of an accurate but slow waveform model with minimal to no loss in accuracy, but they may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying model is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternate choices for building surrogate models which allow for a more sparse set of input waveforms. Research supported in part by NSERC.
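
    The core idea, spending expensive evaluations of the underlying model once and then answering later queries cheaply, can be illustrated with a deliberately simple surrogate. This sketch uses piecewise-linear interpolation over tabulated points; actual waveform surrogates use reduced bases and far more sophisticated fits:

```python
import math

def build_surrogate(slow_model, xs):
    """Tabulate an expensive model at a few training points, then
    answer later queries by linear interpolation between neighbours."""
    ys = [slow_model(x) for x in xs]  # the only expensive calls

    def surrogate(x):
        # Locate the bracketing pair of training points and interpolate
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("query outside training range")

    return surrogate

# Stand-in for a slow model: 11 "expensive" evaluations, then cheap queries
approx_sin = build_surrogate(math.sin, [i * 0.1 for i in range(11)])
```

    The trade-off the abstract addresses is visible even here: accuracy depends on how many training evaluations one can afford, which is exactly why sparser input sets matter when the underlying model is numerical relativity.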

  7. CRAC2 model description

    SciTech Connect

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  8. Adaptive background model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaochun; Xiao, Yijun; Chai, Zhi; Wang, Bangping

    2007-11-01

    An adaptive background model aiming at outdoor vehicle detection is presented in this paper. This model is an improved version of PICA (pixel intensity classification algorithm): it classifies pixels into K distributions by color similarity, and then the hypothesis that the background pixel color appears in an image sequence with high frequency is used to evaluate all the distributions and determine which represents the current background color. As experiments show, the model presented in this paper is a robust, adaptive and flexible model, which can deal with situations such as camera motion, lighting changes and so on.
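
    A per-pixel version of the frequency-based idea can be sketched as follows; the mode structure, matching tolerance, and eviction rule are illustrative simplifications of this kind of scheme, not the PICA algorithm itself:

```python
def update_pixel_model(modes, pixel, k=3, tol=10):
    """Per-pixel background model sketch: keep up to k candidate
    intensities with match counts; the most frequently matched
    candidate is taken as the current background value."""
    for m in modes:
        if abs(m["value"] - pixel) <= tol:  # close enough: same mode
            m["count"] += 1
            m["value"] = (m["value"] + pixel) / 2.0  # drift toward new sample
            break
    else:
        modes.append({"value": float(pixel), "count": 1})
        if len(modes) > k:  # evict the least supported candidate
            modes.remove(min(modes, key=lambda m: m["count"]))
    # The highest-frequency mode is declared the background
    return max(modes, key=lambda m: m["count"])["value"]

# A pixel that is mostly road (≈100) with a passing dark vehicle (≈30)
modes, background = [], None
for sample in [100, 100, 100, 100, 100, 30, 30]:
    background = update_pixel_model(modes, sample)
```

    Because the vehicle samples never outnumber the road samples, the background estimate stays on the high-frequency mode, which is the hypothesis the paper exploits.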

  9. Complex matrix model duality

    SciTech Connect

    Brown, T. W.

    2011-04-15

    The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.

  10. Modelling Farm Animal Welfare

    PubMed Central

    Collins, Lisa M.; Part, Chérie E.

    2013-01-01

    Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411

  11. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  12. Animal models of atherosclerosis

    PubMed Central

    Kapourchali, Fatemeh Ramezani; Surendiran, Gangadaran; Chen, Li; Uitz, Elisabeth; Bahadori, Babak; Moghadasian, Mohammed H

    2014-01-01

    In this mini-review several commonly used animal models of atherosclerosis have been discussed. Among them, emphasis has been made on mice, rabbits, pigs and non-human primates. Although these animal models have played a significant role in our understanding of induction of atherosclerotic lesions, we still lack a reliable animal model for regression of the disease. Researchers have reported several genetically modified and transgenic animal models that replicate human atherosclerosis; however, each of the current animal models has some limitations. Among these animal models, the apolipoprotein (apo) E-knockout (KO) mice have been used extensively because they develop spontaneous atherosclerosis. Furthermore, atherosclerotic lesions developed in this model, depending on experimental design, may resemble human stable and unstable atherosclerotic lesions. This mouse model of hypercholesterolemia and atherosclerosis has also been used to investigate the impact of oxidative stress and inflammation on atherogenesis. Low density lipoprotein (LDL)-r-KO mice are a model of human familial hypercholesterolemia. However, unlike apo E-KO mice, the LDL-r-KO mice do not develop spontaneous atherosclerosis. Both apo E-KO and LDL-r-KO mice have been employed to generate other relevant mouse models of cardiovascular disease through breeding strategies. In addition to mice, rabbits have been used extensively, particularly to understand the mechanisms of cholesterol-induced atherosclerosis. The present review paper details the characteristics of animal models that are used in atherosclerosis research. PMID:24868511

  13. Calibrated Properties Model

    SciTech Connect

    T. Ghezzehej

    2004-10-04

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  14. A model of strength

    USGS Publications Warehouse

    Johnson, Douglas H.; Cook, R.D.

    2013-01-01

    In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.

  15. Titan atmospheric models intercomparison

    NASA Astrophysics Data System (ADS)

    Pernot, P.

    2008-09-01

    Several groups around the world have independently developed models of the photochemistry of Titan. The Cassini mission reveals daily that the chemical complexity is beyond our expectations (e.g., the observation of heavy positive and negative ions), and the models are updated accordingly. At this stage, there is no consensus on the various input parameters, and it becomes increasingly difficult to compare outputs from different models. An ISSI team of experts on those models will be gathered shortly to proceed to an intercomparison, i.e., to assess how the models behave given identical sets of inputs (collectively defined). Expected discrepancies will have to be elucidated and reduced. This intercomparison will also be an occasion to estimate explicitly the importance of various physical-chemical processes on model predictions versus observations. More robust and validated models are expected from this study for the interpretation of Titan-related data.

  16. Multiscale Modeling of Recrystallization

    SciTech Connect

    Godfrey, A.W.; Holm, E.A.; Hughes, D.A.; Lesar, R.; Miodownik, M.A.

    1998-12-07

    We propose a multi-length-scale approach to modeling recrystallization which links a dislocation model, a cell growth model, and a macroscopic model. Although this methodology and linking framework will be applied to recrystallization, it is also applicable to other types of phase transformations in bulk and layered materials. Critical processes such as the dislocation structure evolution, nucleation, the evolution of crystal orientations into a preferred texture, and grain size evolution all operate at different length scales. In this paper we focus on incorporating experimental measurements of dislocation substructures, misorientation measurements of dislocation boundaries, and dislocation simulations into a mesoscopic model of cell growth. In particular, we show how feeding information from the dislocation model into the cell growth model can create realistic initial microstructure.

  17. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  18. Foam process models.

    SciTech Connect

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

  19. Minimal quiver standard model

    SciTech Connect

    Berenstein, David; Pinansky, Samuel

    2007-05-01

    This paper discusses the minimal quiver gauge theory embedding of the standard model that could arise from brane-world type string theory constructions. It is based on the low energy effective field theory of D-branes in the perturbative regime. The model differs from the standard model by the addition of one extra massive gauge boson, and contains only one parameter beyond the standard model: the mass of this new particle. The coupling of this new particle to the standard model is uniquely determined by input from the standard model and consistency conditions of perturbative string theory. We also study some aspects of the phenomenology of this model and bounds on its possible observation at the Large Hadron Collider.

  20. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph, not necessarily as regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described, and illustrations associated with practical work are given.
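    As a minimal concrete illustration of conditional independence in a Bayesian network (a toy "sprinkler" example of our own choosing, not one from the chapter), the factorization of the joint distribution directly encodes the independence statements:

```python
from itertools import product

# Toy Bayesian network: Rain -> WetGrass <- Sprinkler.
# The joint factorizes as P(R) * P(S) * P(W | R, S); R and S are
# marginally independent but become dependent once W is observed
# ("explaining away").  All probabilities are made-up illustrations.
P_R = {True: 0.2, False: 0.8}
P_S = {True: 0.1, False: 0.9}
P_W1 = {(True, True): 0.99, (True, False): 0.90,
        (False, True): 0.80, (False, False): 0.05}  # P(W=1 | R, S)

def joint(r, s, w):
    pw = P_W1[(r, s)]
    return P_R[r] * P_S[s] * (pw if w else 1.0 - pw)

# Marginal independence of R and S follows from the factorization:
for r, s in product([True, False], repeat=2):
    p_rs = sum(joint(r, s, w) for w in (True, False))
    assert abs(p_rs - P_R[r] * P_S[s]) < 1e-12

def p_rain_given(s, w):
    """P(R = True | S = s, W = w) by enumeration of the joint."""
    den = sum(joint(r, s, w) for r in (True, False))
    return joint(True, s, w) / den

# Conditional dependence given the common child: knowing the sprinkler
# ran lowers the probability that rain explains the wet grass.
assert p_rain_given(True, True) < p_rain_given(False, True)
```

    The same enumeration scales poorly, which is exactly why the specialized inference algorithms surveyed in such chapters exist.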

  1. Ventilation Model Report

    SciTech Connect

    V. Chipman; J. Case

    2002-12-20

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) to validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0); and (2) to satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of
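    The central quantity, the fraction of decay heat removed by the air, reduces to an energy balance. The sketch below uses illustrative round numbers and is not the report's ANSYS or MULTIFLUX calculation:

```python
def ventilation_efficiency(m_dot, cp_air, t_out, t_in, q_decay):
    """Fraction of decay heat carried away by the ventilation air for one
    drift segment.  m_dot: air mass flow (kg/s); cp_air: specific heat
    (J/kg/K); t_out, t_in: air temperatures leaving/entering the segment
    (K); q_decay: radionuclide decay heat generated in the segment (W)."""
    q_air = m_dot * cp_air * (t_out - t_in)
    return q_air / q_decay

# Illustrative numbers: 15 kg/s of air warming by 5 K against 100 kW of decay heat.
eta = ventilation_efficiency(m_dot=15.0, cp_air=1005.0,
                             t_out=305.0, t_in=300.0, q_decay=100e3)
wall_heat_fraction = 1.0 - eta  # remainder conducted into the surrounding rock
```

    In the report's terms, `eta` is the heat removal and `wall_heat_fraction` is the quantity handed to downstream post-closure models.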

  2. Integrated modeling, data transfers, and physical models

    NASA Astrophysics Data System (ADS)

    Brookshire, D. S.; Chermak, J. M.

    2003-04-01

    Difficulties in developing precise economic policy models for water reallocation and re-regulation in various regional and transboundary settings have been exacerbated not only by climate issues but also by institutional changes reflected in the promulgation of environmental laws, changing regional populations, and an increased focus on water quality standards. As the complexity of the water issues has increased, model development at a micro-policy level is necessary to capture difficult institutional nuances and represent the differing national, regional, and stakeholders' viewpoints. More often than not, adequate "local" or specific micro-data are not available in all settings for modeling and policy decisions. Economic policy analysis increasingly deals with this problem through data transfers (transferring results from one study area to another), and significant progress has been made in understanding the issue of the dimensionality of data transfers. This paper explores the conceptual and empirical dimensions of data transfers in the context of integrated modeling when the transfers are not only from the behavioral, but also from the hard sciences. We begin by exploring the domain of transfer issues associated with policy analyses that directly consider uncertainty in both the behavioral and physical science settings. We then, through a stylized, hybrid, economic-engineering model of water supply and demand in the Middle Rio Grande Valley of New Mexico (USA), analyze the impacts of: (1) the relative uncertainty of data transfer methods, (2) the uncertainty of climate data, and (3) the uncertainty of population growth. These efforts are motivated by the need to address the relative importance of more accurate data both from the physical sciences as well as from demography and economics for policy analyses. We evaluate the impacts by empirically addressing (within the Middle Rio Grande model): (1) How much does the surrounding uncertainty of the benefit transfer

  3. Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling.

    PubMed

    Yesson, C; Culham, A

    2006-10-01

    We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates, others show wide-ranging generalism. This demonstrates that Phyloclimatic modeling could be repeated for other plant groups and is fundamental to the understanding of

  4. Modeling agriculture in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Drewniak, B.; Song, J.; Prell, J.; Kotamarthi, V. R.; Jacob, R.

    2012-12-01

    The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types - maize, soybean, and spring wheat - into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements. CLM-Crop yields were comparable with observations in some regions, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model - simulating agriculture in a realistic way, complete with fertilizer and residue management practices. 
Results are encouraging, with improved representation of human influences on the land surface and the potentially

  5. Loehlin's original models and model contributions.

    PubMed

    McArdle, John J

    2014-11-01

    This is a short story about John C. Loehlin, who is now at the University of Texas at Austin, dealing with his original simulation models and developments, which led to his current latent variable models. This talk was initially presented at a special meeting for John before the BGA in Rhode Island, and I was very pleased to contribute. It probably goes without saying, but John helped create this important society, has been a key contributor to this journal for several decades, and he deserves a great deal of credit for this leadership. PMID:25367673

  6. Modeling local dependence in longitudinal IRT models.

    PubMed

    Olsbjerg, Maja; Christensen, Karl Bang

    2015-12-01

    Measuring change in a latent variable over time is often done using the same instrument at several time points. This can lead to dependence between responses across time points for the same person yielding within person correlations that are stronger than what can be attributed to the latent variable. Ignoring this can lead to biased estimates of changes in the latent variable. In this paper we propose a method for modeling local dependence in the longitudinal 2PL model. It is based on the concept of item splitting, and makes it possible to correctly estimate change in the latent variable. PMID:25552424
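    A minimal sketch of the item-splitting idea, using an illustrative parameterization of our own rather than the paper's estimator: the time-2 item is given a history-dependent difficulty, so the extra within-person association is absorbed by the item rather than inflating the latent variable:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response model: probability of a positive response for a
    person at latent level theta, item discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_time2_split(theta2, a, b, response_time1, shift=0.6):
    """'Split' time-2 item: its difficulty depends on the time-1 response
    to the same item, absorbing the local dependence.  The shift value is
    an arbitrary illustration, not an estimated quantity."""
    b_split = b - shift if response_time1 == 1 else b + shift
    return p_2pl(theta2, a, b_split)

# At the same latent level, a person who endorsed the item at time 1 has
# a higher endorsement probability at time 2 than one who did not:
p_hi = p_time2_split(0.0, a=1.2, b=0.0, response_time1=1)
p_lo = p_time2_split(0.0, a=1.2, b=0.0, response_time1=0)
```

    Without the split, that residual association would be attributed to the latent variable, biasing the estimated change.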

  7. Constitutive models in LAME.

    SciTech Connect

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms have been implemented in LAME. The structure and testing of LAME is described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from the simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies, and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes. The analyst can take confidence in the fact that each model has been properly implemented.

  8. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
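    The fit-versus-complexity trade-off can be illustrated with a BIC-style approximation to the Bayesian evidence. This is a simplified stand-in for the full prior-aware criterion the abstract describes, applied to synthetic data of our own making rather than rheological measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "data" from a 2-parameter linear law plus noise; we ask which
# polynomial order a BIC-style criterion favors.  A toy stand-in for
# choosing the number of Maxwell modes.
x = np.linspace(0.0, 1.0, 50)
y = 1.5 + 2.0 * x + rng.normal(0.0, 0.05, x.size)

def bic(order):
    """Gaussian log-likelihood at the least-squares fit plus a
    k*ln(n) complexity penalty (lower is better)."""
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, order + 1
    sigma2 = np.mean(resid ** 2)
    return n * np.log(sigma2) + k * np.log(n)

scores = {order: bic(order) for order in range(1, 6)}
best = min(scores, key=scores.get)  # extra parameters barely improve fit
```

    Higher orders reduce the residual only marginally, so the penalty dominates and the low-order model is preferred, mirroring the principle stated above.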

  9. Geochemical modeling: a review

    SciTech Connect

    Jenne, E.A.

    1981-06-01

    Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contain submodels to first calculate a distribution of aqueous species and to secondly test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. Another family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness to applications in nuclear-waste isolation, etc. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic data bases and systematic validation before they are generally accepted.
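    The "solubility hypothesis test" step common to both model families can be sketched as a saturation-index calculation. The activities and solubility product below are illustrative round numbers for calcite, not values from any particular code:

```python
import math

def saturation_index(activity_ca, activity_co3, ksp=10 ** -8.48):
    """SI = log10(IAP / Ksp) for a simple binary solid such as calcite.
    A water is near equilibrium with the solid when SI is close to 0;
    SI > 0 suggests oversaturation (precipitation favored), SI < 0
    undersaturation (dissolution favored)."""
    iap = activity_ca * activity_co3  # ion activity product
    return math.log10(iap / ksp)

# Near-equilibrium example with made-up activities:
si = saturation_index(activity_ca=10 ** -3.5, activity_co3=10 ** -5.0)
```

    In a full speciation-solubility code, the activities themselves come from the prior aqueous-speciation step rather than being supplied directly.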

  10. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections. PMID:26353238

  11. The Earth System Model

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol

    2003-01-01

    The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models - atmosphere, ocean, ice, land, chemistry, solid earth, etc. - merged together through a coupling program which is responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in observing system simulation experiments (OSSEs) to evaluate proposed observing systems. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese ES theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.

  12. Kp forecast models

    NASA Astrophysics Data System (ADS)

    Meng, C.; Wing, S.; Johnson, J. R.; Jen, J.; Carr, S.; Sibeck, D. G.; Costello, K.; Freeman, J.; Balikhin, M.; Bechtold, K.; Vandegriff, J.

    2004-12-01

    Magnetically active times, e.g., Kp > 5, are notoriously difficult to predict, precisely when the predictions are crucial to the space weather users. Taking advantage of the routinely available solar wind measurements at the Lagrangian point (L1) and nowcast Kps, Kp forecast models based on neural networks were developed with the focus on improving the forecast for active times. In order to satisfy different needs and operational constraints, three models were developed: (1) a model that inputs nowcast Kp and solar wind parameters and predicts Kp 1 hr ahead; (2) a model with the same input as (1) that predicts Kp 4 hr ahead; and (3) a model that inputs only solar wind parameters and predicts Kp 1 hr ahead (the exact prediction lead time depends on the solar wind speed and the location of the solar wind monitor). Extensive evaluations of these models and other major operational Kp forecast models show that while the new models predict Kp more accurately for all activity levels, the most dramatic improvements occur for moderate and active times. The evaluations of the models over 2 solar cycles, 1975-2001, show that solar wind driven models predict Kp more accurately during solar maximum than solar minimum. This result, as well as information dynamics analysis of Kp, suggests that geospace is more dominated by internal dynamics during solar minimum than solar maximum, when it is more directly driven by external inputs, namely the solar wind and IMF.
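    The input-output structure of model (1) can be sketched as a small feed-forward network. The feature choice, scalings, and weights below are placeholders of ours (the operational models were trained on decades of data and their architectures are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer network mapping L1 solar-wind parameters
# (speed, density, IMF Bz) plus the nowcast Kp to Kp one hour ahead.
# Random, untrained weights purely to show the forward pass.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def predict_kp(v_sw, n_sw, bz, kp_now):
    """Forward pass; inputs are crudely scaled to order one, and the
    output is squashed onto the valid Kp range [0, 9]."""
    x = np.array([v_sw / 800.0, n_sw / 20.0, bz / 20.0, kp_now / 9.0])
    h = np.tanh(W1 @ x + b1)
    out = (W2 @ h + b2)[0]
    return float(9.0 / (1.0 + np.exp(-out)))

kp_pred = predict_kp(v_sw=600.0, n_sw=5.0, bz=-10.0, kp_now=5.0)
```

    Bounding the output to the Kp scale is one simple way to keep an unconstrained regressor physically meaningful.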

  13. Preliminary DIAL model

    SciTech Connect

    Gentry, S.; Taylor, J.; Stephenson, D.

    1994-06-01

    A unique end-to-end LIDAR sensor model has been developed supporting the concept development stage of the CALIOPE UV DIAL and UV laser-induced-fluorescence (LIF) efforts. The model focuses on preserving the temporal and spectral nature of signals as they pass through the atmosphere, are collected by the optics, detected by the sensor, and processed by the sensor electronics and algorithms. This is done by developing accurate component sub-models with realistic inputs and outputs, as well as internal noise sources and operating parameters. These sub-models are then configured using data-flow diagrams to operate together to reflect the performance of the entire DIAL system. This modeling philosophy allows the developer to have a realistic indication of the nature of signals throughout the system and to design components and processing in a realistic environment. Current component models include atmospheric absorption and scattering losses, plume absorption and scattering losses, background, telescope and optical filter models, PMT (photomultiplier tube) with realistic noise sources, amplifier operation and noise, A/D converter operation, noise and distortion, pulse averaging, and DIAL computation. Preliminary results of the model will be presented, indicating the expected model operation for the October field test at the NTS spill test facility. Indications will be given concerning near-term upgrades to the model.
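    The "DIAL computation" stage at the end of the chain is the standard two-wavelength retrieval; a textbook sketch (not the CALIOPE processing chain) with illustrative numbers is:

```python
import math

def dial_concentration(p_on_near, p_on_far, p_off_near, p_off_far,
                       delta_sigma, delta_r):
    """Mean number density (m^-3) over a range cell of width delta_r (m)
    from the standard DIAL equation.  delta_sigma: differential absorption
    cross section (m^2) between on- and off-resonance wavelengths; p_*:
    backscattered powers at the near and far edges of the cell."""
    ratio = (p_off_far * p_on_near) / (p_on_far * p_off_near)
    return math.log(ratio) / (2.0 * delta_sigma * delta_r)

# Illustrative values: on-line return attenuated more than off-line
# across a 100 m cell implies a positive retrieved concentration.
n_retrieved = dial_concentration(p_on_near=1.0, p_on_far=0.4,
                                 p_off_near=1.0, p_off_far=0.5,
                                 delta_sigma=1e-22, delta_r=100.0)
```

    Because the retrieval uses power ratios, constant system gains and (to first order) aerosol backscatter cancel, which is the key attraction of the DIAL technique.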

  14. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is to do a direct comparison between the various terms and their models. The success of this approach depends upon the availability of the data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using the second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This, therefore, includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs better, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward the second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for the turbulence modelers.
The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  15. Mathematical models of hysteresis

    SciTech Connect

    Mayergoyz, I.D.

    1991-01-01

    The research described in this proposal is currently being supported by the US Department of Energy under the contract "Mathematical Models of Hysteresis". Thus, before discussing the proposed research in detail, it is worthwhile to describe and summarize the main results achieved in the course of our work under the above contract. Our ongoing research has largely been focused on the development of mathematical models of hysteretic nonlinearities with "nonlocal memories". The distinct feature of these nonlinearities is that their current states depend on past histories of input variations. It turns out that memories of hysteretic nonlinearities are quite selective. Indeed, experiments show that only some past input extrema leave their marks upon future states of hysteretic nonlinearities. Thus special mathematical tools are needed in order to describe nonlocal selective memories of hysteretic nonlinearities. Our research has been primarily concerned with Preisach-type models of hysteresis. All these models have a common generic feature; they are constructed as superpositions of the simplest hysteretic nonlinearities: rectangular loops. Our study has by and large been centered around the following topics: various generalizations and extensions of the classical Preisach model, finding of necessary and sufficient conditions for the representation of actual hysteretic nonlinearities by various Preisach type models, solution of identification problems for these models, and numerical implementation and experimental testing of Preisach type models. Although the study of Preisach type models has constituted the main direction of the research, some effort has also been made to establish some interesting connections between these models and such topics as: the critical state model for superconducting hysteresis, the classical Stoner-Wohlfarth model of vector magnetic hysteresis, thermal activation type models for viscosity, magnetostrictive hysteresis, and neural networks.
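    The "superposition of rectangular loops" construction can be made concrete in a few lines. The thresholds and uniform weights below are arbitrary illustrative choices, not an identified Preisach measure:

```python
# Minimal Preisach sketch: the output is a weighted superposition of
# rectangular relay operators, each with an "up" switching threshold
# alpha and a "down" threshold beta (alpha >= beta).  A relay keeps its
# state between the thresholds; that retained state is the memory.

class Relay:
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def apply(self, u):
        if u >= self.alpha:
            self.state = +1
        elif u <= self.beta:
            self.state = -1
        return self.state  # unchanged in between

class Preisach:
    def __init__(self, relays_weights):
        self.relays_weights = relays_weights

    def apply(self, u):
        return sum(w * r.apply(u) for r, w in self.relays_weights)

model = Preisach([(Relay(a, b), 1.0)
                  for a in (0.2, 0.5, 0.8) for b in (-0.8, -0.5, -0.2)])

up = [model.apply(u) for u in (-1.0, 0.0, 0.3, 0.6, 1.0)]   # ascending input
down = [model.apply(u) for u in (0.6, 0.3, 0.0, -1.0)]      # descending input
# At the same input (e.g. 0.3) the two branches give different outputs:
# that branch dependence is the hysteresis loop.
```

    Identification, as mentioned above, amounts to choosing the weights over the (alpha, beta) plane so that the superposition reproduces measured loops.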

  16. A Rasch Hierarchical Measurement Model.

    ERIC Educational Resources Information Center

    Maier, Kimberly S.

    This paper describes a model that integrates an item response theory (IRT) Rasch model and a hierarchical linear model and presents a method of estimating model parameter values that does not rely on large-sample theory and normal approximations. The model resulting from the integration of a hierarchical linear model and the Rasch model allows one…

  17. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  18. Modeling Imports in a Keynesian Expenditure Model

    ERIC Educational Resources Information Center

    Findlay, David W.

    2010-01-01

    The author discusses several issues that instructors of introductory macroeconomics courses should consider when introducing imports in the Keynesian expenditure model. The analysis suggests that the specification of the import function should partially, if not completely, be the result of a simple discussion about the spending and import…
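    The specification issue can be made concrete by contrasting equilibrium income under two import functions: imports depending on total income versus on consumption only. The functional forms follow the standard Keynesian cross; the numbers are illustrative:

```python
def equilibrium_income_imports_on_Y(autonomous, mpc, mpm):
    """Imports M = mpm * Y:  Y = A + mpc*Y - mpm*Y
    =>  Y = A / (1 - mpc + mpm)."""
    return autonomous / (1.0 - mpc + mpm)

def equilibrium_income_imports_on_C(autonomous, mpc, mpm):
    """Imports M = mpm * C = mpm * mpc * Y:
    =>  Y = A / (1 - mpc * (1 - mpm))."""
    return autonomous / (1.0 - mpc * (1.0 - mpm))

# Same autonomous spending and propensities under both specifications:
y_on_Y = equilibrium_income_imports_on_Y(autonomous=500.0, mpc=0.8, mpm=0.1)
y_on_C = equilibrium_income_imports_on_C(autonomous=500.0, mpc=0.8, mpm=0.1)
# The multiplier is larger when imports leak only out of consumption,
# so the two specifications imply different policy effects.
```

    Working both cases in class shows students why the import function's argument, not just its slope, matters for the multiplier.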

  19. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

    A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
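    The Monte Carlo combination of the three life stages can be sketched as follows. The distributions, material constants, and stress level are illustrative placeholders, not the paper's calibrated micromechanical values (the nucleation and small-crack stages are represented here by simple lognormal draws rather than the slip-band and CTOD models):

```python
import math
import random

random.seed(42)

def paris_life(a0, af, C, m, delta_sigma, Y=1.12):
    """Cycles to grow a crack from a0 to af (m) under the Paris law
    da/dN = C * (dK)^m with dK = Y * delta_sigma * sqrt(pi * a),
    integrated in closed form (valid for m != 2)."""
    p = 1.0 - m / 2.0
    factor = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    return (af ** p - a0 ** p) / (p * factor)

def sample_total_life(delta_sigma=200.0):
    """One Monte Carlo realization: nucleation + small-crack +
    long-crack (Paris) life, with scatter in lives and nucleated size."""
    n_nucleation = random.lognormvariate(math.log(2e4), 0.4)
    a0 = random.lognormvariate(math.log(25e-6), 0.3)  # nucleated size (m)
    n_small = random.lognormvariate(math.log(1e4), 0.5)
    n_long = paris_life(a0=max(a0, 1e-6), af=5e-3,
                        C=1e-11, m=3.0, delta_sigma=delta_sigma)
    return n_nucleation + n_small + n_long

lives = sorted(sample_total_life() for _ in range(2000))
median_life = lives[len(lives) // 2]  # the ensemble gives the scatter, not one number
```

    The output of interest is the whole distribution of `lives`, which is what gets compared against the experimentally observed scatter.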

  20. Animal models of fibromyalgia

    PubMed Central

    2013-01-01

    Animal models of disease states are valuable tools for developing new treatments and investigating underlying mechanisms. They should mimic the symptoms and pathology of the disease and importantly be predictive of effective treatments. Fibromyalgia is characterized by chronic widespread pain with associated co-morbid symptoms that include fatigue, depression, anxiety and sleep dysfunction. In this review, we present different animal models that mimic the signs and symptoms of fibromyalgia. These models are induced by a wide variety of methods that include repeated muscle insults, depletion of biogenic amines, and stress. All potential models produce widespread and long-lasting hyperalgesia without overt peripheral tissue damage and thus mimic the clinical presentation of fibromyalgia. We describe the methods for induction of the model, pathophysiological mechanisms for each model, and treatment profiles. PMID:24314231

  1. Multiscale Cancer Modeling

    PubMed Central

    Macklin, Paul; Cristini, Vittorio

    2013-01-01

    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  2. Outside users payload model

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The outside users payload model, which continues an earlier series of documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.

  3. Teaching macromolecular modeling.

    PubMed

    Harvey, S C; Tan, R K

    1992-12-01

    Training newcomers to the field of macromolecular modeling is as difficult as is training beginners in x-ray crystallography, nuclear magnetic resonance, or other methods in structural biology. In one or two lectures, the most that can be conveyed is a general sense of the relationship between modeling and other structural methods. If a full semester is available, then students can be taught how molecular structures are built, manipulated, refined, and analyzed on a computer. Here we describe a one-semester modeling course that combines lectures, discussions, and a laboratory using a commercial modeling package. In the laboratory, students carry out prescribed exercises that are coordinated to the lectures, and they complete a term project on a modeling problem of their choice. The goal is to give students an understanding of what kinds of problems can be attacked by molecular modeling methods and which problems are beyond the current capabilities of those methods. PMID:1489919

  4. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

  5. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's strength in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, using the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. Simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
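    For orientation, the *standard* bat algorithm that CBA extends can be sketched as below. This is a minimal illustration only: the cloud-model echolocation and Lévy-flight components from the paper are omitted, and all parameter values are the usual BA conventions, not tuned settings from the paper.

```python
import math
import random

# Minimal sketch of the standard bat algorithm (BA). Each bat carries a
# position x, velocity v, loudness A (decays on accepted moves) and pulse
# rate r (recovers toward 0.5); a frequency-tuned velocity update is mixed
# with a local random walk around the current best bat.
def bat_algorithm(obj, dim=2, n_bats=20, n_iter=300,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)

    def clamp(z):
        return min(hi, max(lo, z))

    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats   # loudness A_i
    rate = [0.5] * n_bats   # pulse emission rate r_i
    best = min(x, key=obj)[:]
    for t in range(n_iter):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()  # frequency tuning
            cand = []
            for d in range(dim):
                v[i][d] += (x[i][d] - best[d]) * f
                cand.append(clamp(x[i][d] + v[i][d]))
            if rng.random() > rate[i]:  # local random walk near the best bat
                cand = [clamp(best[d] + 0.1 * rng.gauss(0, 1))
                        for d in range(dim)]
            if obj(cand) <= obj(x[i]) and rng.random() < loud[i]:
                x[i] = cand
                loud[i] *= alpha                              # get quieter
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))  # pulse more
            if obj(x[i]) < obj(best):
                best = x[i][:]
    return best

best = bat_algorithm(lambda p: sum(c * c for c in p))
print(sum(c * c for c in best))  # small residual on the sphere function
```

    On a simple test function such as the sphere function above, even this bare-bones BA converges close to the optimum; the cloud-model and Lévy-flight extensions in the paper aim to improve that exploration/exploitation balance.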

  6. Energy balance climate models

    NASA Technical Reports Server (NTRS)

    North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.

    1981-01-01

    An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved, and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.
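    The simplest member of this model hierarchy, a zero-dimensional global energy balance, can be sketched as follows. This is my own minimal illustration with textbook parameter values (solar constant, planetary albedo, effective emissivity), not one of the latitude-dependent ice-cap models solved in the survey.

```python
import math

# Zero-dimensional energy balance model:
#   C dT/dt = S0 (1 - albedo) / 4  -  eps * sigma * T^4
S0, ALBEDO, EPS, SIGMA = 1361.0, 0.30, 0.61, 5.67e-8
C = 2.0e8  # effective heat capacity, J m^-2 K^-1 (assumed ocean mixed layer)

def equilibrium_temp():
    """Analytic steady state: absorbed solar equals emitted longwave."""
    return (S0 * (1 - ALBEDO) / (4 * EPS * SIGMA)) ** 0.25

def integrate(T0=255.0, dt=86400.0, years=50):
    """March the ODE forward with daily Euler steps until near steady state."""
    T = T0
    for _ in range(int(years * 365)):
        T += dt / C * (S0 * (1 - ALBEDO) / 4 - EPS * SIGMA * T ** 4)
    return T

print(round(equilibrium_temp(), 1))  # roughly 288 K with these parameters
print(round(integrate(), 1))         # numerical solution relaxes to the same value
```

    The feedback and stability analyses in the survey linearize exactly this kind of balance about its equilibrium.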

  7. Models for poloidal divertors

    SciTech Connect

    Post, D.E.; Heifetz, D.; Petravic, M.

    1982-07-01

    Recent progress in models for poloidal divertors has both helped to explain current divertor experiments and contributed significantly to design efforts for future large tokamak (INTOR, etc.) divertor systems. These models range in sophistication from zero-dimensional treatments and dimensional analysis to two-dimensional models for plasma and neutral particle transport which include a wide variety of atomic and molecular processes as well as detailed treatments of the plasma-wall interaction. This paper presents a brief review of some of these models, describing the physics and approximations involved in each model. We discuss the wide variety of physics necessary for a comprehensive description of poloidal divertors. To illustrate the progress in models for poloidal divertors, we discuss some of our recent work as typical examples of the kinds of calculations being done.

  8. Load Model Data Tool

    SciTech Connect

    David Chassin, Pavel Etingov

    2013-04-30

    The LMDT software automates preparation of load composite model data in the formats supported by the major power system software vendors (GE and Siemens). Proper representation of the composite load model in power system dynamic analysis is very important. Power system simulation tools such as GE PSLF and Siemens PSSE already include algorithms for composite load modeling; however, they require that the input information on the composite load be provided in custom formats. Preparing these data is time consuming and requires multiple manual operations. The LMDT software automates this process. It is designed to generate composite load model data, using default load composition data, motor information, and bus information as input. The software processes the input information and produces a load composition model, which can be stored in the .dyd format supported by the GE PSLF package or the .dyr format supported by the Siemens PSSE package.

  10. Liftoff Model for MELCOR.

    SciTech Connect

    Young, Michael F.

    2015-07-01

    Aerosol particles that deposit on surfaces may be subsequently resuspended by air flowing over the surface. A review of models for this liftoff process is presented and compared to available data. Based on this review, a model that agrees with existing data and is readily computed is presented for incorporation into a system level code such as MELCOR.
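    A minimal sketch of the kind of readily computed resuspension model a system-level code favors: a first-order rate law with a friction-velocity threshold. Both the functional form and the constants below are placeholders for illustration, not the model actually selected for MELCOR.

```python
import math

# Deposited aerosol lifts off at a rate constant lam that grows with
# friction velocity u_star once an adhesion threshold is exceeded:
#   f(t) = 1 - exp(-lam * t)
def liftoff_fraction(u_star, t, u_threshold=0.2, k=1e-4):
    """Fraction of deposited aerosol resuspended after time t (seconds).

    u_star, u_threshold in m/s; k is an assumed rate coefficient (1/s).
    """
    if u_star <= u_threshold:
        return 0.0
    lam = k * (u_star / u_threshold - 1.0)  # assumed linear rate law
    return 1.0 - math.exp(-lam * t)

print(liftoff_fraction(0.1, 3600.0))           # below threshold: nothing lifts off
print(round(liftoff_fraction(0.5, 3600.0), 3))
```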

  11. Invariant turbulence models

    NASA Astrophysics Data System (ADS)

    Bihlo, Alexander; Dos Santos Cardoso-Bihlo, Elsa Maria; Nave, Jean-Christophe; Popovych, Roman

    2012-11-01

    Various subgrid-scale closure models break the invariance of the Euler or Navier-Stokes equations and thus violate the geometric structure of these equations. A method is shown which allows one to systematically derive invariant turbulence models starting from non-invariant turbulence models and thus to correct artificial symmetry-breaking. The method is illustrated by finding invariant hyperdiffusion schemes to be applied in the two-dimensional turbulence problem.

  12. Modelling Pediatric Kinematics

    PubMed Central

    van Ratingen, M.R.; Wismans, J.

    1998-01-01

    In the field of pediatric biomechanics, crash dummy and numerical model development suffers from human subject data too limited to directly establish response and injury values. In order to create child crash dummies and numerical models, it is necessary to combine the results from real-world accident and reconstruction data, scaled adult data, and animal testing with the limited child volunteer data. This paper presents the functional and biomechanical targets for child crash dummies and numerical models.

  13. Acid rain: Mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H. M.

    1980-01-01

    A mesoscale numerical model of the Florida peninsula was formulated and applied to a dry, neutral atmosphere. The prospective use of the STAR-100 computer for the submesoscale model is discussed. The numerical model presented is tested under synoptically undisturbed conditions. Two cases, differing only in the direction of the prevailing geostrophic wind, are examined: a prevailing southwest wind and a prevailing southeast wind, both 6 m/sec at all levels initially.

  14. AREST model description

    SciTech Connect

    Engel, D.W.; McGrail, B.P.

    1993-11-01

    The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) code at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository. The EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all the input parameters to be derived from a statistical distribution. Recently, a one-dimensional numerical model was also incorporated into AREST to allow for more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to consider the reactive coupling of the processes involved in release, including: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, to name a few. Several of these coupled processes are already incorporated in the current version of AREST.

  15. Global Atmospheric Aerosol Modeling

    NASA Technical Reports Server (NTRS)

    Hendricks, Johannes; Aquila, Valentina; Righi, Mattia

    2012-01-01

    Global aerosol models are used to study the distribution and properties of atmospheric aerosol particles as well as their effects on clouds, atmospheric chemistry, radiation, and climate. The present article provides an overview of the basic concepts of global atmospheric aerosol modeling and shows some examples from a global aerosol simulation. Particular emphasis is placed on the simulation of aerosol particles and their effects within global climate models.

  16. Computer Models of Proteins

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.

  17. Rat Endovascular Perforation Model

    PubMed Central

    Sehba, Fatima A.

    2014-01-01

    Experimental animal models of aneurysmal subarachnoid hemorrhage (SAH) have provided a wealth of information on the mechanisms of brain injury. The rat endovascular perforation (EVP) model replicates the early pathophysiology of SAH and hence is frequently used to study early brain injury following SAH. This paper presents a brief review of the historical development of the EVP model, details the technique used to create SAH, and discusses the considerations necessary to overcome technical challenges. PMID:25213427

  18. The XXC models

    NASA Astrophysics Data System (ADS)

    Maassarani, Z.

    1998-07-01

    A class of recently introduced multi-states XX models is generalized to include a deformation parameter. This corresponds to an additional nearest-neighbor CC interaction in the defining quadratic Hamiltonian. Complete integrability of the one-dimensional models is shown in the context of the quantum inverse scattering method. The new R-matrix is derived. The diagonalization of the XXC models is carried out using the algebraic Bethe ansatz.

  19. HOMER® Micropower Optimization Model

    SciTech Connect

    Lilienthal, P.

    2005-01-01

    NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.

  20. Solid model design simplification

    SciTech Connect

    Ames, A.L.; Rivera, J.J.; Webb, A.J.; Hensinger, D.M.

    1997-12-01

    This paper documents an investigation of approaches to improving the quality of Pro/Engineer-created solid model data for use by downstream applications. The investigation identified a number of sources of problems caused by deficiencies in Pro/Engineer's geometric engine, and developed prototype software capable of detecting many of these problems and guiding users towards simplified, useable models. The prototype software was tested using Sandia production solid models, and provided significant leverage in attacking the simplification problem.

  1. Conceptual IT model

    NASA Astrophysics Data System (ADS)

    Arnaoudova, Kristina; Stanchev, Peter

    2015-11-01

    Business processes are the key asset of every organization. The design of business process models is a foremost concern among an organization's functions. Business processes and their proper management depend heavily on the performance of software applications and technology solutions. This paper attempts to define a new conceptual model of an IT service provider, which can be viewed as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.

  2. Modeling Frequency Comb Sources

    NASA Astrophysics Data System (ADS)

    Li, Feng; Yuan, Jinhui; Kang, Zhe; Li, Qian; Wai, P. K. A.

    2016-06-01

    Frequency comb sources have revolutionized metrology and spectroscopy and found applications in many fields. Stable, low-cost, high-quality frequency comb sources are important to these applications. Modeling of frequency comb sources helps in understanding their operation mechanisms and in optimizing their design. In this paper, we review the theoretical models used and recent progress in the modeling of frequency comb sources.

  3. The Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles; Saile, Lynn; Freiere deCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David

    2010-01-01

    The goals of the Integrated Medical Model (IMM) are to develop an integrated, quantified, evidence-based decision support tool useful to crew health and mission planners and to help align science, technology, and operational activities intended to optimize crew health, safety, and mission success. Presentation slides address scope and approach, beneficiaries of IMM capabilities, history, risk components, conceptual models, development steps, and the evidence base. Space adaptation syndrome is used to demonstrate the model's capabilities.

  4. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  5. Multidimensional reactor kinetics modeling

    SciTech Connect

    Diamond, D.J.

    1996-11-01

    There is general agreement that for many light water reactor transient calculations, it is necessary to use a multidimensional neutron kinetics model coupled to a thermal-hydraulics model for satisfactory results. These calculations are needed for a variety of applications: licensing safety analysis, probabilistic risk assessment (PRA), operational support, and training. The latter three applications have always required best-estimate models, but in the past applications for licensing could be satisfied with relatively simple models. By using more sophisticated best-estimate models, the consequences of these calculations are better understood, and the potential for gaining relief from restrictive operating limits increases. Hence, for all of the aforementioned applications, it is important to be able to do best-estimate calculations with multidimensional neutron kinetics models coupled to sophisticated thermal-hydraulic models. Specifically, this paper reviews the status of multidimensional neutron kinetics modeling that would be used in conjunction with thermal-hydraulic models to do core dynamics calculations, either coupled to a complete NSSS representation or in isolation. In addition, the paper makes recommendations as to what should be the state of the art for the next ten years. The review is an update to a previous review of the status as of ten years ago. The general requirements for a core dynamics code and the modeling available for such a code, discussed in that review, are still applicable. The emphasis in the current review is on the neutron kinetics, assuming that the necessary thermal-hydraulic capability exists. In addition to the basic neutron kinetics, related modeling (other than thermal-hydraulics) is discussed. The capabilities and limitations of current computer codes are presented to establish the state of the art and to help clarify the future direction of model development in this area.

  6. Photovoltaic array performance model.

    SciTech Connect

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics of photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
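    A deliberately simplified irradiance-and-temperature power relation conveys the flavor of such performance models. This is not the Sandia model itself, whose electrical, thermal, and optical coefficient sets are documented separately; the rating and temperature coefficient below are typical illustrative values for a crystalline-silicon module.

```python
# Simplified PV performance sketch: output power scales linearly with
# plane-of-array irradiance and derates linearly with cell temperature.
P_STC = 250.0   # module rating at standard test conditions (1000 W/m^2, 25 C), W
GAMMA = -0.004  # fractional power change per degree C (assumed typical value)

def module_power(irradiance, cell_temp_c):
    """Estimated DC power for irradiance in W/m^2 and cell temperature in C."""
    return P_STC * (irradiance / 1000.0) * (1.0 + GAMMA * (cell_temp_c - 25.0))

print(round(module_power(1000.0, 25.0), 1))  # rated power at STC
print(round(module_power(800.0, 45.0), 1))   # derated on a hot, hazy afternoon
```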

  7. Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Wiegelmann, Thomas; Petrie, Gordon J. D.; Riley, Pete

    2015-07-01

    Coronal magnetic field models use photospheric field measurements as the boundary condition to model the solar corona. In this paper we review the most common model assumptions, progressing from MHD models through magnetohydrostatics and force-free models to potential field models. Each model in this sequence is somewhat less complex than the previous one and makes more restrictive assumptions by neglecting physical effects. The magnetohydrostatic approach neglects time-dependent phenomena and plasma flows; the force-free approach additionally neglects the gradient of the plasma pressure and the gravity force. This leads to the assumption of a vanishing Lorentz force, so that electric currents are parallel (or anti-parallel) to the magnetic field lines. Finally, the potential field approach neglects these currents as well. We outline the main assumptions, benefits, and limitations of these models from both a theoretical viewpoint (how realistic are the models?) and a practical one (which computer resources do we need?). Finally, we address the important problem of noisy and inconsistent photospheric boundary conditions and the possibility of using chromospheric and coronal observations to improve the models.
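    The hierarchy of approximations described above can be written compactly in standard notation (textbook forms, not equations quoted from the paper); each line drops one more physical effect, ending with the current-free potential field:

```latex
\begin{align*}
&\text{magnetohydrostatics:} &
\mathbf{j}\times\mathbf{B} - \nabla p + \rho\,\mathbf{g} &= 0,
& \mu_0\,\mathbf{j} &= \nabla\times\mathbf{B} \\
&\text{force-free:} &
\nabla\times\mathbf{B} &= \alpha\,\mathbf{B},
& \mathbf{B}\cdot\nabla\alpha &= 0 \\
&\text{potential:} &
\nabla\times\mathbf{B} &= 0,
& \nabla\cdot\mathbf{B} &= 0
\end{align*}
```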

  8. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction must be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  9. Multifamily Envelope Leakage Model

    SciTech Connect

    Faakye, O.; Griffiths, D.

    2015-05-01

    The objective of the 2013 research project was to develop the model for predicting fully guarded test results (FGT), using unguarded test data and specific building features of apartment units. The model developed has a coefficient of determination R2 value of 0.53 with a root mean square error (RMSE) of 0.13. Both statistical metrics indicate that the model is relatively strong. When tested against data that was not included in the development of the model, prediction accuracy was within 19%, which is reasonable given that seasonal differences in blower door measurements can vary by as much as 25%.
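    The two fit metrics quoted above follow their standard definitions, which can be sketched as below (the leakage data values here are invented for illustration, not the study's measurements):

```python
import math

# Coefficient of determination R^2 and root mean square error (RMSE),
# standard definitions for assessing a prediction model against measurements.
def r_squared(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))  # residual sum of squares
    ss_tot = sum((a - mean) ** 2 for a in y)             # total sum of squares
    return 1 - ss_res / ss_tot

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

measured  = [0.30, 0.45, 0.62, 0.80, 0.55]  # hypothetical guarded test results
predicted = [0.35, 0.40, 0.60, 0.75, 0.60]  # hypothetical model predictions
print(round(r_squared(measured, predicted), 3))
print(round(rmse(measured, predicted), 3))
```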

  10. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  11. Modeling plant morphogenesis.

    PubMed

    Prusinkiewicz, Przemyslaw; Rolland-Lagan, Anne-Gaëlle

    2006-02-01

    Applications of computational techniques to developmental plant biology include the processing of experimental data and the construction of simulation models. Substantial progress has been made in these areas over the past few years. Complex image-processing techniques are used to integrate sequences of two-dimensional images into three-dimensional descriptions of development over time and to extract useful quantitative traits. Large amounts of data are integrated into empirical models of developing plant organs and entire plants. Mechanistic models link molecular-level phenomena with the resulting phenotypes. Several models shed light on the possible properties of active auxin transport and its role in plant morphogenesis. PMID:16376602

  12. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  13. Theory of Chemical Modeling

    NASA Astrophysics Data System (ADS)

    Kühn, Michael

    In order to deal with the complexity of natural systems, simplified models are employed to illustrate the principal and regulatory factors controlling a chemical system. Following the aphorism of Albert Einstein, "Everything should be made as simple as possible, but not simpler," models need not be completely realistic to be useful (Stumm and Morgan 1996), but they must strike a successful balance between realism and practicality. Properly constructed, a model is neither so simplified that it is unrealistic nor so detailed that it cannot be readily evaluated and applied to the problem of interest (Bethke 1996). The results of a model have to be at least partially observable or experimentally verifiable (Zhu and Anderson 2002). Geochemical modeling theories are presented here in a sequence of increasing complexity, from geochemical equilibrium models to kinetic, reaction path, and finally coupled transport and reaction models. The description is far from complete but provides what is needed to set up the reactive transport models of hydrothermal systems developed in subsequent chapters. Extensive reviews of geochemical models in general can be found in the literature (Appelo and Postma 1999, Bethke 1996, Melchior and Bassett 1990, Nordstrom and Ball 1984, Paschke and van der Heijde 1996).

  14. Lightning return stroke models

    NASA Technical Reports Server (NTRS)

    Lin, Y. T.; Uman, M. A.; Standler, R. B.

    1980-01-01

    We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations to the measured two-station fields. Using the new model, we derive return stroke charge and current statistics for about 100 subsequent strokes.
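    The two classical models tested in the paper are conventionally written as follows (standard forms from the lightning literature, where $i(z,t)$ is the channel current at height $z$ and $v$ is the return-stroke propagation speed):

```latex
\begin{align*}
\text{Bruce--Golde:} \quad & i(z,t) = i(0,t)
  && \text{(current uniform along the channel)} \\
\text{Transmission line:} \quad & i(z,t) = i\!\left(0,\; t - z/v\right),
  \quad z \le v t
  && \text{(base current propagates upward undistorted)}
\end{align*}
```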

  15. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
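    A common convention for flowing allocations through such a hierarchy is a root-sum-square (RSS) roll-up, appropriate for independent zero-mean error sources. The budget tree below is invented for illustration; it is not a budget from the paper.

```python
import math

# RSS roll-up: allocations at the leaves combine as the square root of the
# sum of squares at each level of the hierarchy.
def rss(errors):
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical leaf allocations (e.g., micrometers of wavefront error)
optics    = rss([10.0, 4.0, 3.0])  # polish, alignment, coating
structure = rss([6.0, 8.0])        # thermal distortion, jitter
model_err = rss([5.0])             # modeling-uncertainty reserve

system_total = rss([optics, structure, model_err])
requirement = 20.0
print(round(system_total, 2), system_total <= requirement)
```

    A model error budget, as the paper proposes, would add terms like `model_err` explicitly so that modeling uncertainty is carried alongside hardware allocations.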

  16. DISJUNCTIVE NORMAL SHAPE MODELS

    PubMed Central

    Ramesh, Nisha; Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2016-01-01

    A novel implicit parametric shape model is proposed for segmentation and analysis of medical images. Functions representing the shape of an object can be approximated as a union of N polytopes. Each polytope is obtained by the intersection of M half-spaces. The shape function can be approximated as a disjunction of conjunctions, using the disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable, hence, gradient based optimization algorithms are used to find the model parameters. PMID:27403233
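    The disjunction-of-conjunctions structure can be sketched directly: a point is inside the shape if it lies in any polytope, and inside a polytope if it satisfies all of its half-spaces. This is a hard, non-differentiable toy version with hand-picked half-space coefficients; the paper's model relaxes these predicates into differentiable sigmoids so the Chan-Vese energy can be minimized by gradient-based optimization.

```python
# Disjunctive normal shape sketch: shape = union of polytopes,
# polytope = intersection of half-spaces {x : w . x + b >= 0}.
def halfspace(w, b):
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b >= 0

# polytope 1: the axis-aligned square [0,1] x [0,1]
p1 = [halfspace((1, 0), 0), halfspace((-1, 0), 1),
      halfspace((0, 1), 0), halfspace((0, -1), 1)]
# polytope 2: the square [2,3] x [0,1]
p2 = [halfspace((1, 0), -2), halfspace((-1, 0), 3),
      halfspace((0, 1), 0), halfspace((0, -1), 1)]

def inside(x, polytopes=(p1, p2)):
    # disjunction over polytopes, conjunction over half-spaces
    return any(all(h(x) for h in poly) for poly in polytopes)

print(inside((0.5, 0.5)), inside((2.5, 0.5)), inside((1.5, 0.5)))
```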

  17. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
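    An "interim"-style simulator of the kind described above, drawing uncorrelated hourly speeds from a fitted distribution, can be sketched as follows. A Weibull distribution is assumed here as a typical wind-speed fit; the shape and scale values are illustrative, not the Goldstone statistics from the report.

```python
import math
import random

SHAPE, SCALE = 2.0, 7.0  # assumed Weibull shape k and scale lambda (m/s)

def hourly_speeds(n, seed=1):
    """Uncorrelated hourly wind-speed samples via inverse-CDF sampling:
    v = lambda * (-ln U)^(1/k) for uniform U in (0, 1]."""
    rng = random.Random(seed)
    return [SCALE * (-math.log(1.0 - rng.random())) ** (1.0 / SHAPE)
            for _ in range(n)]

def mean_power_density(speeds, rho=1.1):
    """Mean wind power density in W/m^2 (rho ~ desert-altitude air density)."""
    return 0.5 * rho * sum(v ** 3 for v in speeds) / len(speeds)

v = hourly_speeds(24 * 365)
print(round(sum(v) / len(v), 2))   # close to lambda * Gamma(1 + 1/k), ~6.2 m/s here
print(round(mean_power_density(v), 1))
```

    The cubic dependence of power on speed is why the report distinguishes matching speed *distributions* (the interim model) from also matching speed *correlations* (the stochastic model): the same mean speed can yield very different power statistics.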

  18. Kp forecast models

    NASA Astrophysics Data System (ADS)

    Wing, S.; Johnson, J. R.; Jen, J.; Meng, C.-I.; Sibeck, D. G.; Bechtold, K.; Freeman, J.; Costello, K.; Balikhin, M.; Takahashi, K.

    2005-04-01

    Magnetically active times, e.g., Kp > 5, are notoriously difficult to predict, precisely the times when such predictions are crucial to space weather users. Taking advantage of the routinely available solar wind measurements at the Lagrangian point (L1) and nowcast Kps, Kp forecast models based on neural networks were developed with a focus on improving the forecast for active times. To satisfy different needs and operational constraints, three models were developed: (1) a model that inputs nowcast Kp and solar wind parameters and predicts Kp 1 hour ahead; (2) a model with the same input as model 1 that predicts Kp 4 hours ahead; and (3) a model that inputs only solar wind parameters and predicts Kp 1 hour ahead (the exact prediction lead time depends on the solar wind speed and the location of the solar wind monitor). Extensive evaluations of these models and other major operational Kp forecast models show that while the new models can predict Kps more accurately for all activities, the most dramatic improvements occur for moderate and active times. Information dynamics analysis of Kp suggests that geospace is more dominated by internal dynamics near solar minimum than near solar maximum, when it is more directly driven by external inputs, namely the solar wind and interplanetary magnetic field (IMF).

  19. Kp forecast models

    NASA Astrophysics Data System (ADS)

    Wing, S.; Johnson, J. R.; Meng, C.; Takahashi, K.

    2005-05-01

    Magnetically active times, e.g., Kp > 5, are notoriously difficult to predict, yet these are precisely the times when such predictions are crucial to space weather users. Taking advantage of routinely available solar wind measurements at the Lagrangian point L1 and nowcast Kps, Kp forecast models based on neural networks were developed with a focus on improving forecasts for active times. To satisfy different needs and operational constraints, three models were developed: (1) a model that inputs nowcast Kp and solar wind parameters and predicts Kp 1 hr ahead; (2) a model with the same inputs as model 1 that predicts Kp 4 hr ahead; and (3) a model that inputs only solar wind parameters and predicts Kp 1 hr ahead (the exact prediction lead time depends on the solar wind speed and the location of the solar wind monitor). Extensive evaluations of these models and other major operational Kp forecast models show that, while the new models predict Kp more accurately at all activity levels, the most dramatic improvements occur for moderate and active times. Information dynamics analysis of Kp suggests that geospace is more dominated by internal dynamics near solar minimum than near solar maximum, when it is more directly driven by external inputs, namely the solar wind and interplanetary magnetic field (IMF).

  20. Models of Reality.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-06-02

    Conscious awareness of our environment is based on a feedback loop comprised of sensory input transmitted to the central nervous system, leading to construction of our ''model of the world'' (Lewis et al., 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory-derived processes. These are the processes that give the designer the ability to meta-model (build a model of a model) the user, consequently matching the mental model of the user with that of the designer and, coincidentally, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized; it is closer to equivocal, thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and how to formulate these into models for success and knowledge-based outcomes is the subject of the discussion that follows.

  1. The LISA Integrated Model

    NASA Technical Reports Server (NTRS)

    Merkowitz, Stephen M.

    2002-01-01

    The Laser Interferometer Space Antenna (LISA) space mission has unique needs that argue for an aggressive modeling effort. These models ultimately need to forecast and interrelate the behavior of the science input, structure, optics, control systems, and many other factors that affect the performance of the flight hardware. In addition, many components of these integrated models will also be used separately for the evaluation and investigation of design choices, technology development and integration and test. This article presents an overview of the LISA integrated modeling effort.

  2. Risk factors for increased BTEX exposure in four Australian cities.

    PubMed

    Hinwood, Andrea L; Rodriguez, Clemencia; Runnion, Tina; Farrar, Drew; Murray, Frank; Horton, Anthony; Glass, Deborah; Sheppeard, Vicky; Edwards, John W; Denison, Lynnette; Whitworth, Tom; Eiser, Chris; Bulsara, Max; Gillett, Rob W; Powell, Jenny; Lawson, S; Weeks, Ian; Galbally, Ian

    2007-01-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) are common volatile organic compounds (VOCs) found in urban airsheds. Elevated levels of VOCs have been reported in many airsheds at many locations, particularly those associated with industrial activity, wood heater use and heavy traffic. Exposure to some VOCs has been associated with health risks. There have been limited investigations into community exposures to BTEX using personal monitoring to elucidate the concentrations to which members of the community may be exposed and the main contributors to that exposure. In this cross-sectional study we investigated BTEX exposure of 204 non-smoking, non-occupationally exposed people from four Australian cities. Each participant wore a passive BTEX sampler over 24 h on five consecutive days in both winter and summer and completed an exposure source questionnaire for each season and a diary for each day of monitoring. The geometric mean (GM) and range of daily BTEX concentrations recorded for the study population were benzene 0.80 (0.04-23.8 ppb); toluene 2.83 (0.03-2120 ppb); ethylbenzene 0.49 (0.03-119 ppb); and xylenes 2.36 (0.04-697 ppb). A generalised linear model was used to investigate significant risk factors for increased BTEX exposure. Activities and locations found to increase personal exposure included vehicle repair and machinery use, refuelling of motor vehicles, being in an enclosed car park and time spent undertaking arts and crafts. A highly significant difference was found between the mean exposures in each of the four cities, which may be explained by differences in fuel composition, differences in the mix and density of industry, density of motor vehicles and air pollution meteorology. PMID:16837022
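    The regression step can be sketched on synthetic data: here log-concentration is modeled against binary activity indicators by least squares, a simplified stand-in for the study's generalised linear model. The variable names, coefficients, and data below are invented for illustration and do not reproduce the study's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic analogue of the study design: log benzene exposure against
# two binary risk-factor indicators (hypothetical effect sizes).
refuel = rng.integers(0, 2, n)       # refuelled a vehicle that day?
carpark = rng.integers(0, 2, n)      # time in an enclosed car park?
log_conc = -0.2 + 0.6 * refuel + 0.4 * carpark + rng.normal(0, 0.3, n)

# Least-squares fit of the log-linear model.
X = np.column_stack([np.ones(n), refuel, carpark])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)

# exp(beta) gives each factor's multiplicative effect on concentration.
effects = np.exp(beta)
```

    On the log scale the fitted coefficients are additive, so exponentiating them expresses each risk factor as a concentration ratio, which is how such GLM results are usually interpreted.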

  3. Determining air quality and greenhouse gas impacts of hydrogen infrastructure and fuel cell vehicles.

    PubMed

    Stephens-Romero, Shane; Carreras-Sospedra, Marc; Brouwer, Jacob; Dabdub, Donald; Samuelsen, Scott

    2009-12-01

    Adoption of hydrogen infrastructure and hydrogen fuel cell vehicles (HFCVs) to replace gasoline internal combustion engine (ICE) vehicles has been proposed as a strategy to reduce criteria pollutant and greenhouse gas (GHG) emissions from the transportation sector and transition to fuel independence. However, it is uncertain (1) to what degree the reduction in criteria pollutants will impact urban air quality, and (2) how the reductions in pollutant emissions and concomitant urban air quality impacts compare to ultralow emission gasoline-powered vehicles projected for a future year (e.g., 2060). To address these questions, the present study introduces a "spatially and temporally resolved energy and environment tool" (STREET) to characterize the pollutant and GHG emissions associated with a comprehensive hydrogen supply infrastructure and HFCVs at a high level of geographic and temporal resolution. To demonstrate the utility of STREET, two spatially and temporally resolved scenarios for hydrogen infrastructure are evaluated in a prototypical urban airshed (the South Coast Air Basin of California) using geographic information systems (GIS) data. The well-to-wheels (WTW) GHG emissions are quantified and the air quality is established using a detailed atmospheric chemistry and transport model followed by a comparison to a future gasoline scenario comprised of advanced ICE vehicles. One hydrogen scenario includes more renewable primary energy sources for hydrogen generation and the other includes more fossil fuel sources. The two scenarios encompass a variety of hydrogen generation, distribution, and fueling strategies. 
GHG emissions reductions range from 61 to 68% for both hydrogen scenarios in parallel with substantial improvements in urban air quality (e.g., reductions of 10 ppb in peak 8-h-averaged ozone and 6 µg/m(3) in 24-h-averaged particulate matter concentrations, particularly in regions of the airshed where concentrations are highest for the gasoline scenario

  4. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  5. Groundwater Model Validation

    SciTech Connect

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
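    The counting step, deciding whether enough stochastic realizations conform to the validation data, can be sketched as follows. The RMSE criterion and the 50% acceptance threshold below are placeholder assumptions for illustration; the actual approach computes five measures and follows a decision tree:

```python
import math
import random

def rmse(pred, obs):
    """Root-mean-square error between one realization and the data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def acceptable_fraction(realizations, validation_data, tolerance):
    """Fraction of stochastic realizations whose RMSE against the
    validation data falls within the tolerance."""
    passed = sum(1 for r in realizations
                 if rmse(r, validation_data) <= tolerance)
    return passed / len(realizations)

rng = random.Random(1)
obs = [1.0, 2.0, 3.0, 4.0]           # hypothetical validation heads
# 100 synthetic realizations: the observations plus random perturbations.
reals = [[o + rng.gauss(0, 0.5) for o in obs] for _ in range(100)]

frac = acceptable_fraction(reals, obs, tolerance=0.6)
sufficient = frac >= 0.5             # placeholder acceptance decision
```

    Whatever metric is chosen, the point is the same: validation of a stochastic model is a statement about the ensemble of realizations, not about any single run.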

  6. Why business models matter.

    PubMed

    Magretta, Joan

    2002-05-01

    "Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance. PMID:12024761

  7. Biosphere Process Model Report

    SciTech Connect

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor

  8. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  9. Spiral model pilot project information model

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure that the project meets programmatic guidelines. The beauty of the SM, along with the IM, is the ability not only to measure the correctness of the specification and implementation of the requirements but also to obtain a measure of customer satisfaction.

  10. Model Minority Stereotype Reconsidered.

    ERIC Educational Resources Information Center

    Kobayashi, Futoshi

    This paper explores the origin and historical background of the "model minority" stereotype. It includes evidence illustrating problems resulting from the artificial grouping of Asian Americans as one ethnic group and the stereotype's influence on young Asian Americans. In the 1960s, the U.S. media began to portray the model minority through…

  11. Modelling with Magnets.

    ERIC Educational Resources Information Center

    Gabel, Dorothy; And Others

    1992-01-01

    Chemistry can be described on three levels: sensory, molecular, and symbolic. Proposes a particle approach to teaching chemistry that uses magnets to help students construct molecular models and solve particle problems. Includes examples of Johnstone's model of chemistry phenomena, a problem worksheet, and a student concept mastery sheet. (MDH)

  12. Modeling for Insights

    SciTech Connect

    Jacob J. Jacobson; Gretchen Matthern

    2007-04-01

    System Dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, System Dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The real power of System Dynamics modeling is gaining insights into total system behavior as time and system parameters are adjusted and the effects are visualized in real time. System Dynamics models allow decision makers and stakeholders to explore the long-term behavior and performance of complex systems, especially in the context of dynamic processes and changing scenarios, without having to wait decades to obtain field data or risk failure if a poor management or design approach is used. The Idaho National Laboratory has recently been developing a System Dynamics model of the US Nuclear Fuel Cycle. The model is intended to be used to identify and understand interactions throughout the entire nuclear fuel cycle and suggest sustainable development strategies. This paper describes the basic framework of the current model and presents examples of useful insights gained from the model thus far with respect to sustainable development of nuclear power.

  13. SUSY GUT Model Building

    SciTech Connect

    Raby, Stuart

    2008-11-23

    In this talk I discuss the evolution of SUSY GUT model building as I see it. Starting with 4 dimensional model building, I then consider orbifold GUTs in 5 dimensions and finally orbifold GUTs embedded into the E{sub 8}xE{sub 8} heterotic string.

  14. Erosion by Wind: Modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models of wind erosion are used to investigate fundamental processes and guide resource management. Many models are similar in that - temporal variables control soil wind erodibility; erosion begins when friction velocity exceeds a threshold; and transport capacity for saltation/creep is proportion...

  15. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  16. Models in Biology.

    ERIC Educational Resources Information Center

    Flannery, Maura C.

    1997-01-01

    Addresses the most popular models currently being chosen for biological research and the reasons behind those choices. Among the current favorites are zebra fish, fruit flies, mice, monkeys, and yeast. Concludes with a brief examination of the ethical issues involved, and why some animals may need to be replaced in research with model systems.…

  17. Models and Metaphors

    ERIC Educational Resources Information Center

    Ivie, Stanley D.

    2007-01-01

    Humanity delights in spinning conceptual models of the world. These models, in turn, mirror their respective root metaphors. Three root metaphors--spiritual, organic, and mechanical--have dominated western thought. The spiritual metaphor runs from Plato, through Hegel, and connects with Montessori. The organic metaphor extends from Aristotle,…

  18. PHOTOCHEMICAL BOX MODEL (PBM)

    EPA Science Inventory

    This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the Photochemical Box Model (PBM). The PBM is a simple stationary single-cell model with a variable height lid designed to provide volume-integrated hour averages of O3 and other ph...

  19. Flowfield modeling and diagnostics

    SciTech Connect

    Gupta, A.K.; Lilley, D.G.

    1985-01-01

    This textbook is devoted solely to flowfield modeling and diagnostics; their practical use, recent and current research, and projected developments and trends. It provides an account of the use of a broad range of techniques in industrial and research practice, both with and without combustion. Application ideas are complemented by details about experimental and modeling techniques.

  20. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  1. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  2. Updating Situation Models

    ERIC Educational Resources Information Center

    Zwaan, Rolf A.; Madden, Carol J.

    2004-01-01

    The authors examined how situation models are updated during text comprehension. If comprehenders keep track of the evolving situation, they should update their models such that the most current information, the here and now, is more available than outdated information. Contrary to this updating hypothesis, E. J. O'Brien, M. L. Rizzella, J. E.…

  3. Canister Model, Systems Analysis

    1993-09-29

    This package provides a computer simulation of a systems model for packaging nuclear waste and spent nuclear fuel in canisters. The canister model calculates overall programmatic cost, number of canisters, and fuel and waste inventories for the Idaho Chemical Processing Plant (other initial conditions can be entered).

  4. Acid rain: Microphysical model

    NASA Technical Reports Server (NTRS)

    Dingle, A. N.

    1980-01-01

    A microphysical model was used to simulate the case of a ground cloud without dilution by entrainment and without precipitation. The numerical integration techniques of the model are presented. The droplet size spectra versus time and the droplet molalities for each value of time are discussed.

  5. Model Cities Training Program.

    ERIC Educational Resources Information Center

    Tennessee Univ., Chattanooga.

    The Model Cities Training Program, the first in the country, is a 10-session course to be conducted in seminar form under the direction of the University of Tennessee at Chattanooga. The objective is to enable the 50 members of the Community Development Administration Board of Directors to: acquire knowledge of the structure of the Model Cities…

  6. Connectionist Modelling and Education.

    ERIC Educational Resources Information Center

    Evers, Colin W.

    2000-01-01

    Provides a detailed, technical introduction to the state of cognitive science research, in particular the rise of the "new cognitive science," especially artificial neural net (ANN) models. Explains one influential ANN model and describes diverse applications and their implications for education. (EV)

  7. Animal models for osteoporosis.

    PubMed

    Turner, R T; Maran, A; Lotinun, S; Hefferan, T; Evans, G L; Zhang, M; Sibonga, J D

    2001-01-01

    Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results, which ultimately can bring harm to patients who suffer from osteoporosis by delaying the advancement of knowledge. PMID:11704974

  8. Solar Atmosphere Models

    NASA Astrophysics Data System (ADS)

    Rutten, R. J.

    2002-12-01

    This contribution honoring Kees de Jager's 80th birthday is a review of "one-dimensional" solar atmosphere modeling that followed on the initial "Utrecht Reference Photosphere" of Heintze, Hubenet & de Jager (1964). My starting point is the Bilderberg conference, convened by de Jager in 1967 at the time when NLTE radiative transfer theory became mature. The resulting Bilderberg model was quickly superseded by the HSRA and later by the VAL-FAL sequence of increasingly sophisticated NLTE continuum-fitting models from Harvard. They became the "standard models" of solar atmosphere physics, but Holweger's relatively simple LTE line-fitting model still persists as a favorite of solar abundance determiners. After a brief model inventory I discuss subsequent work on the major modeling issues (coherency, NLTE, dynamics) listed as to-do items by de Jager in 1968. The present conclusion is that one-dimensional modeling recovers Schwarzschild's (1906) finding that the lower solar atmosphere is grosso modo in radiative equilibrium. This is a boon for applications regarding the solar atmosphere as a one-dimensional stellar example - but the real sun, including all the intricate phenomena that now constitute the mainstay of solar physics, is vastly more interesting.

  9. Modelling Hadronic Matter

    NASA Astrophysics Data System (ADS)

    Menezes, Débora P.

    2016-04-01

    Hadron physics stands somewhere in the diffuse intersection between nuclear and particle physics and relies largely on the use of models. Historically, around 1930, the first nuclear physics models, known as the liquid drop model and the semi-empirical mass formula, established the grounds for the study of nuclei properties and nuclear structure. These two models are parameter dependent. Nowadays, around 500 non-relativistic (Skyrme-type) and relativistic models are available in the literature and widely used, and the vast majority are parameter dependent. In this review I discuss some of the shortcomings of using non-relativistic models and the advantages of using relativistic ones when applying them to describe hadronic matter. I also show possible applications of relativistic models to physical situations that cover part of the QCD phase diagram: I mention how the description of compact objects can be done, how heavy-ion collisions can be investigated and particle fractions obtained, and show the relation between liquid-gas phase transitions and the pasta phase.

  10. Pathological Gambling: Psychiatric Models

    ERIC Educational Resources Information Center

    Westphal, James R.

    2008-01-01

    Three psychiatric conceptual models: addictive, obsessive-compulsive spectrum and mood spectrum disorder have been proposed for pathological gambling. The objectives of this paper are to (1) evaluate the evidence base from the most recent reviews of each model, (2) update the evidence through 2007 and (3) summarize the status of the evidence for…

  11. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  12. STREAM WATER QUALITY MODEL

    EPA Science Inventory

    QUAL2K (or Q2K) is a river and stream water quality model that is intended to represent a modernized version of the QUAL2E (or Q2E) model (Brown and Barnwell 1987). Q2K is similar to Q2E in the following respects:

    • One dimensional. The channel is well-mixed vertically a...

    • Fictional models in science

      NASA Astrophysics Data System (ADS)

      Morrison, Margaret

      2014-02-01

      When James Clerk Maxwell set out his famous equations 150 years ago, his model of electromagnetism included a piece of pure fiction: an invisible, all-pervasive "aether" made up of elastic vortices separated by electric charges. Margaret Morrison explores how this and other "fictional" models shape science.

    • Composite Load Model Evaluation

      SciTech Connect

      Lu, Ning; Qiao, Hong

      2007-09-30

      The WECC load modeling task force has dedicated its effort in the past few years to developing a composite load model that can represent the behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function, CMPLDW, in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces to evaluate the CMPLDW and test its parameter settings to make sure that: • the model initializes properly, • all the parameter settings are functioning, and • the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting was performed to verify that each setting works. This report is a summary of the PNNL testing results and conclusions.

    • HYBRID RECEPTOR MODELS

      EPA Science Inventory

      A hybrid receptor model is a specified mathematical procedure which uses not only the ambient species concentration measurements that form the input data for a pure receptor model, but in addition source emission rates or atmospheric dispersion or transformation information chara...

    • The Leadership Training Model.

      ERIC Educational Resources Information Center

      Parker, Jeanette P.

      1983-01-01

      The article proposes a model for developing leadership among gifted students. Four components of the model are identified and sample subskills described: cognition (exploration, research); problem solving (incubation, creative thinking); interpersonal communication (self realization, cooperation, conflict resolution); and decision making skills…

    • Animal models for osteoporosis

      NASA Technical Reports Server (NTRS)

      Turner, R. T.; Maran, A.; Lotinun, S.; Hefferan, T.; Evans, G. L.; Zhang, M.; Sibonga, J. D.

      2001-01-01

    Animal models will continue to be important tools in the quest to understand the contribution of specific genes to the establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well-characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results, which can ultimately bring harm to patients who suffer from osteoporosis by delaying the advancement of knowledge.

    • Structural Equation Model Trees

      ERIC Educational Resources Information Center

      Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

      2013-01-01

      In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

    • Modeling Water Filtration

      ERIC Educational Resources Information Center

      Parks, Melissa

      2014-01-01

      Model-eliciting activities (MEAs) are not new to those in engineering or mathematics, but they were new to Melissa Parks. Model-eliciting activities are simulated real-world problems that integrate engineering, mathematical, and scientific thinking as students find solutions for specific scenarios. During this process, students generate solutions…

    • Computational Modeling of Tires

      NASA Technical Reports Server (NTRS)

      Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)

      1995-01-01

      This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.

  1. AGRICULTURAL SIMULATION MODEL (AGSIM)

    EPA Science Inventory

    AGSIM is a large-scale econometric simulation model of regional crop and national livestock production in the United States. The model was initially developed to analyze the aggregate economic impacts of a wide variety of issues facing agriculture, such as technological change, pest...

  2. Generalized gamma frailty model.

    PubMed

    Balakrishnan, N; Peng, Yingwei

    2006-08-30

    In this article, we present a frailty model using the generalized gamma distribution as the frailty distribution. It is a power generalization of the popular gamma frailty model. It also includes other frailty models such as the lognormal and Weibull frailty models as special cases. The flexibility of this frailty distribution makes it possible to detect a complex frailty distribution structure which may otherwise be missed. Due to the intractable integrals in the likelihood function and its derivatives, we propose to approximate the integrals either by Monte Carlo simulation or by a quadrature method and then determine the maximum likelihood estimates of the parameters in the model. We explore the properties of the proposed frailty model and the computation method through a simulation study. The study shows that the proposed model can potentially reduce errors in the estimation, and that it provides a viable alternative for correlated data. The merits of the proposed model are demonstrated in analysing the effects of sublingual nitroglycerin and oral isosorbide dinitrate on angina pectoris of coronary heart disease patients, based on the data set in Danahy et al. (sustained hemodynamic and antianginal effect of high dose oral isosorbide dinitrate. Circulation 1977; 55:381-387). PMID:16220516
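    The clustering idea behind frailty models can be illustrated with the classical shared gamma frailty special case that the generalized model extends. The sketch below is illustrative only; the function names, parameter values, and seed are assumptions, not taken from the paper.

```python
import random

# Toy simulation of a shared gamma frailty model: within each cluster, a
# common frailty Z ~ Gamma(shape, scale) with E[Z] = 1 multiplies the
# baseline hazard, inducing positive correlation among cluster members'
# survival times. All values here are illustrative.
random.seed(42)

def simulate_cluster(n_members, shape=5.0, base_hazard=0.1):
    # scale = 1/shape gives E[Z] = 1, the usual identifiability constraint
    z = random.gammavariate(shape, 1.0 / shape)
    # Conditional on z, survival times are exponential with hazard z * base_hazard
    return [random.expovariate(z * base_hazard) for _ in range(n_members)]

clusters = [simulate_cluster(2) for _ in range(5000)]

# Frailty induces dependence: members of the same cluster tend to both
# fail early (high z) or both fail late (low z), so the within-cluster
# sample covariance should come out positive.
times = [t for c in clusters for t in c]
mean_t = sum(times) / len(times)
cov = sum((c[0] - mean_t) * (c[1] - mean_t) for c in clusters) / len(clusters)
print(mean_t, cov > 0)
```

    Running the sketch shows a positive within-cluster covariance, the signature of the shared frailty that the intractable likelihood integrals in the paper have to account for.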

  3. Humane Education: A Model

    ERIC Educational Resources Information Center

    Dobson, Russell; And Others

    1976-01-01

    A two part hypothetical model of education incorporating basic beliefs of man with educational practice is presented for consideration by educators. Basic elements of the model include purpose, experience, formative evaluation, philosophy, knowledge, learning, goals, curriculum, instruction, and parental involvement. Journal may be ordered from…

  4. Stereolithography models. Final report

    SciTech Connect

    Smith, R.E.

    1995-03-01

    This report describes the first stereolithographic models made, which proved out a new release of ProEngineer software (Parametric Technologies, or PTC) and of 3D Systems (Valencia, California) software for the SLA 250 machine. They are a model of benzene and of the α-carbon backbone of the variable region of an antibody.

  5. Metaphorical Models in Chemistry.

    ERIC Educational Resources Information Center

    Rosenfeld, Stuart; Bhusan, Nalini

    1995-01-01

    What happens when students of chemistry fail to recognize the metaphorical status of certain models and interpret them literally? Suggests that such failures lead students to form perceptions of phenomena that can be misleading. Argues that the key to making good use of metaphorical models is a recognition of their metaphorical status. Examines…

  6. Modeling and simulation

    SciTech Connect

    Hanham, R.; Vogt, W.G.; Mickle, M.H.

    1986-01-01

    This book presents the papers given at a conference on computerized simulation. Topics considered at the conference included expert systems, modeling in electric power systems, power systems operating strategies, energy analysis, a linear programming approach to optimum load shedding in transmission systems, econometrics, simulation in natural gas engineering, solar energy studies, artificial intelligence, vision systems, hydrology, multiprocessors, and flow models.

  7. Postinstability models in elasticity

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1984-01-01

    It is demonstrated that the instability caused by the failure of hyperbolicity in elasticity and associated with the problem of unpredictability in classical mechanics expresses the incompleteness of the original model of an elastic medium. The instability as well as the ill-posedness of the Cauchy problem are eliminated by reformulating the original model.

  8. Foundations of Biomolecular Modeling

    PubMed Central

    Jorgensen, William L.

    2014-01-01

    The 2013 Nobel Prize in Chemistry has been awarded to Martin Karplus, Michael Levitt, and Arieh Warshel for “Development of Multiscale Models for Complex Chemical Systems”. The honored work from the 1970s has provided a foundation for the widespread activities today in modeling organic and biomolecular systems. PMID:24315087

  9. Model State Efforts.

    ERIC Educational Resources Information Center

    Morgan, Gwen

    Models of state involvement in training child care providers are briefly discussed and the employers' role in training is explored. Six criteria for states that are taken as models are identified, and four are described. Various state activities are described for each criterion. It is noted that little is known about employer and other private…

  10. Mathematical models of hysteresis

    SciTech Connect

    1998-08-01

    The ongoing research has largely been focused on the development of mathematical models of hysteretic nonlinearities with nonlocal memories. The distinct feature of these nonlinearities is that their current states depend on past histories of input variations. It turns out that memories of hysteretic nonlinearities are quite selective. Indeed, experiments show that only some past input extrema (not the entire input variations) leave their marks upon future states of hysteretic nonlinearities. Thus special mathematical tools are needed in order to describe nonlocal selective memories of hysteretic nonlinearities. The origin of such tools can be traced back to the landmark paper of Preisach. Their research has been primarily concerned with Preisach-type models of hysteresis. All these models have a common generic feature: they are constructed as superpositions of the simplest hysteretic nonlinearities, rectangular loops. During the past four years, the study has been by and large centered around the following topics: (1) further development of scalar and vector Preisach-type models of hysteresis; (2) experimental testing of Preisach-type models of hysteresis; (3) development of new models for viscosity (aftereffect) in hysteretic systems; (4) development of mathematical models for superconducting hysteresis in the case of gradual resistive transitions; (5) software implementation of Preisach-type models of hysteresis; and (6) development of new ideas which have emerged in the course of the research work. The author briefly describes the main scientific results obtained in the areas outlined above.
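    The superposition of rectangular loops described above can be sketched as a minimal scalar Preisach model: each elementary operator is a relay with an "up" threshold alpha and a "down" threshold beta, and the output is a weighted sum over a grid of relays. The thresholds, weights, and class names are illustrative assumptions, not taken from the reported research.

```python
# Minimal scalar Preisach model: output = weighted superposition of
# rectangular-loop operators (relays) with switching thresholds
# alpha >= beta. Between thresholds a relay keeps its state - this is
# the local memory that makes the aggregate output history dependent.

class Relay:
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, u):
        if u >= self.alpha:
            self.state = +1
        elif u <= self.beta:
            self.state = -1
        return self.state  # unchanged between thresholds

class Preisach:
    def __init__(self, relays, weights):
        self.relays, self.weights = relays, weights

    def apply(self, u):
        return sum(w * r.update(u) for r, w in zip(self.relays, self.weights))

# Small grid of relays over the half-plane alpha >= beta, equal weights.
relays = [Relay(a / 4, b / 4) for a in range(-3, 4) for b in range(-3, a + 1)]
model = Preisach(relays, [1.0 / len(relays)] * len(relays))

up = [model.apply(u / 10) for u in range(-10, 11)]        # sweep -1 -> +1
down = [model.apply(u / 10) for u in range(10, -11, -1)]  # sweep +1 -> -1
print(up[-1], down[-1])   # saturates near +1, then back near -1
print(up[10], down[10])   # at u = 0 the two branches differ: hysteresis
```

    The ascending branch lies below the descending branch at the same input, which is exactly the loop behavior the Preisach superposition is designed to reproduce.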

  11. Model-Based Reasoning

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  12. A night sky model.

    NASA Astrophysics Data System (ADS)

    Erpylev, N. P.; Smirnov, M. A.; Bagrov, A. V.

    A night sky model is proposed. It includes different components of light pollution, such as solar twilight, moon-scattered light, zodiacal light, the Milky Way, airglow and artificial light pollution. The model is designed for calculating the efficiency of astronomical installations.

  13. Modeling HIV Cure

    NASA Astrophysics Data System (ADS)

    Perelson, Alan; Conway, Jessica; Cao, Youfang

    A large effort is being made to find a means to cure HIV infection. I will present a dynamical model of post-treatment control (PTC) or ``functional cure'' of HIV-infection. Some patients treated with suppressive antiviral therapy have been taken off of therapy and then spontaneously control HIV infection such that the amount of virus in the circulation is maintained undetectable by clinical assays for years. The model explains PTC occurring in some patients by having a parameter regime in which the model exhibits bistability, with both a low and a high steady-state viral load being stable. The model makes a number of predictions about how to attain the low PTC steady state. Bistability in this model depends upon the immune response becoming exhausted when overstimulated. I will also present a generalization of the model in which immunotherapy can be used to reverse immune exhaustion, and compare model predictions with experiments in SIV-infected macaques given immunotherapy and then taken off of antiretroviral therapy. Lastly, if time permits, I will discuss one of the hurdles to true HIV eradication, latently infected cells, and present clinical trial data and a new model addressing pharmacological means of flushing out the latent reservoir. Supported by NIH Grants AI028433 and OD011095.
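    The bistability invoked above can be illustrated with a deliberately simple one-dimensional caricature, not the published PTC model: a scalar ODE with two stable steady states (a low "controlled" state and a high "viral load" state) separated by an unstable threshold, so the outcome depends only on which side of the threshold the initial condition lies. All names and values are illustrative.

```python
# Toy caricature of bistability: dv/dt = v (v - a) (1 - v) has stable
# steady states at v = 0 (low, "post-treatment control") and v = 1
# (high viral load), separated by an unstable threshold at v = a.

def settle(v0, a=0.3, dt=0.01, steps=20000):
    # Forward-Euler integration until the trajectory settles.
    v = v0
    for _ in range(steps):
        v += dt * v * (v - a) * (1.0 - v)
    return v

low = settle(0.2)   # starts below the threshold: decays toward 0
high = settle(0.5)  # starts above the threshold: grows toward 1
print(round(low, 3), round(high, 3))  # -> 0.0 1.0
```

    In the talk's model the same qualitative structure arises in several dimensions, with immune exhaustion supplying the nonlinearity; the point of the sketch is only that identical dynamics can send nearby initial conditions to very different stable outcomes.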

  14. Prewhirl Jet Model

    NASA Technical Reports Server (NTRS)

    Meng, S. Y.; Jensen, M.; Jackson, E. D.

    1985-01-01

    Simple accurate model of centrifugal or rocket engine pumps provides information necessary to design inducer backflow deflector, backflow eliminator and prewhirl jet in jet mixing zones. Jet design based on this model shows improvement in inducer suction performance and reduced cavitation damage.

  15. MULTIMEDIA EXPOSURE MODELING

    EPA Science Inventory

    This task addresses a number of issues that arise in multimedia modeling with an emphasis on interactions among the atmosphere and multiple other environmental media. Approaches for working with multiple types of models and the data sets are being developed. Proper software tool...

  16. Using Models Effectively

    ERIC Educational Resources Information Center

    Eichinger, John

    2005-01-01

    Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar-and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…

  17. SOSS ICN Model Validation

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan

    2016-01-01

    Under the NASA-KAIA-KARI ATM research collaboration agreement, SOSS ICN Model has been developed for Incheon International Airport. This presentation describes the model validation work in the project. The presentation will show the results and analysis of the validation.

  18. MODELING THE AMES TEST

    EPA Science Inventory

    Despite the value and widespread use of the Ames test, little attention has been focused on standardizing quantitative methods of analyzing these data. In this paper, a realistic and statistically tractable model is developed for the evaluation of Ames-type data. The model assume...

  19. VENTURI SCRUBBER PERFORMANCE MODEL

    EPA Science Inventory

    The paper presents a new model for predicting the particle collection performance of venturi scrubbers. It assumes that particles are collected by atomized liquid only in the throat section. The particle collection mechanism is inertial impaction, and the model uses a single drop...

  20. Modeling Carbon Exchange

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Model results will be reviewed to assess different methods for bounding the terrestrial role in the global carbon cycle. It is proposed that a series of climate model runs could be scoped that would tighten the limits on the "missing sink" of terrestrial carbon and could also direct future satellite image analyses to search for its geographical location and understand its seasonal dynamics.

  1. Warm Inflation Model Building

    NASA Astrophysics Data System (ADS)

    Bastero-Gil, Mar; Berera, Arjun

    We review the main aspects of the warm inflation scenario, focusing on the inflationary dynamics and the predictions related to the primordial spectrum of perturbations, to be compared with the recent cosmological observations. We study in detail three different classes of inflationary models (chaotic, hybrid, and hilltop models) and discuss their embedding into supersymmetric models and the consequences of the warm inflationary dynamics for model building, based on first-principles calculations. Due to the extra friction term introduced in the inflaton background evolution by the dissipative dynamics, inflation can take place generically for smaller values of the field, and larger values of couplings and masses. When the dissipative dynamics dominates over the expansion, in the so-called strong dissipative regime, inflation proceeds with sub-Planckian inflaton values. Models can be naturally embedded into a supergravity framework, with SUGRA corrections suppressed by the Planck mass now under control for a larger class of Kähler potentials. In particular, this provides a simpler solution to the "eta" problem in supersymmetric hybrid inflation, without restricting the Kähler potentials compatible with inflation. For chaotic models, dissipation leads to a smaller prediction for the tensor-to-scalar ratio and a less tilted spectrum when compared to the cold inflation scenario. We find in particular that a small component of dissipation renders the quartic model consistent with the current CMB data.

  2. Modeling and Interrogative Strategies.

    ERIC Educational Resources Information Center

    Denney, Douglas R.

    Three studies to determine the effects of adult models on interrogative strategies of children (ages 6-11) are reviewed. Two issues are analyzed: (1) the comparative effectiveness of various types of modeling procedures for changing rule-governed behaviors, and (2) the interaction between observational learning and the developmental level of the…

  3. Modelling University Governance

    ERIC Educational Resources Information Center

    Trakman, Leon

    2008-01-01

    Twentieth century governance models used in public universities are subject to increasing doubt across the English-speaking world. Governments question if public universities are being efficiently governed; if their boards of trustees are adequately fulfilling their trust obligations towards multiple stakeholders; and if collegial models of…

  4. Dynamical models of happiness.

    PubMed

    Sprott, J C

    2005-01-01

    A sequence of models for the time evolution of one's happiness in response to external events is described. These models with added nonlinearities can produce stable oscillations and chaos even without external events. Potential implications for psychotherapy and a personal approach to life are discussed. PMID:15629066
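    The simplest member of such a sequence can be sketched as a damped oscillator in which a single external event perturbs happiness away from neutral and the response rings down. This is an illustrative linear variant in the spirit of the abstract, not the paper's exact equations; all parameter values are assumptions.

```python
# Damped-oscillator sketch: x'' + b x' + x = 0, with happiness displaced
# by a single positive event x(0) = 1 and no further external input.
# The response oscillates (good and bad swings) and decays to neutral.

def simulate(b=0.2, x0=1.0, dt=0.001, t_end=60.0):
    x, v = x0, 0.0
    peak_late = 0.0  # largest |x| seen after t = 40: measures the decay
    t = 0.0
    while t < t_end:
        a = -b * v - x      # acceleration from damping and restoring force
        v += a * dt         # semi-implicit Euler: stable for oscillators
        x += v * dt
        if t > 40.0:
            peak_late = max(peak_late, abs(x))
        t += dt
    return peak_late

print(simulate())  # late-time amplitude: far below the initial event
```

    With no further events the envelope decays like exp(-b t / 2), which is the "return to neutral" behavior; the paper's nonlinear extensions are what allow sustained oscillations and chaos even without external forcing.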

  5. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  6. Ionospheric modelling for navigation

    NASA Astrophysics Data System (ADS)

    Aragon Angel, M. A.

    Signals transmitted to and from satellites for communication and navigation purposes must pass through the ionosphere. Ionospheric irregularities, most common at equatorial latitudes although they can occur anywhere, can have a major impact on system performance and reliability, and commercial satellite-based navigation service providers need to account for their effects. For a GNSS single-frequency receiver, the Slant Total Electron Content (STEC) must be known by the user through broadcast corrections. In this context, several sets of broadcast parameters can be defined to take this ionospheric term into account. The model chosen to generate the ionospheric correction coefficients for the present study is the NeQuick model, although with a number of adaptations intended to improve effective modelling of ionospheric effects. The aim of this study is to describe a possible adaptation of the NeQuick model for real-time purposes, suitable for single-frequency users. It will therefore be necessary to determine the performance of this modified NeQuick model in correcting the ionospheric delay. In order to generate the ionospheric corrections for single-frequency receivers using the NeQuick model, a certain approach should be followed to adapt its behaviour, since this model was originally developed to provide TEC using averaged monthly information on solar activity, not daily values. Thus, using NeQuick for real-time applications as a broadcast ionospheric model, such as Klobuchar, requires daily solar information at the user point.

  7. A Model for Implementation.

    ERIC Educational Resources Information Center

    O'Connor-Petruso, Sharon Anne

    2003-01-01

    Describes the Constructural Multi-Modalities Model for MST (math, science, and technology) Inquiry Units. The MST Model uses an interdisciplinary and constructivist approach and allows teachers to create lesson plans that: integrate MST in tandem; adhere to local, state, and national standards; and actively engage students' differentiated learning…

  8. Dual-Schemata Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tadahiro; Sawaragi, Tetsuo

    In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizational machine-learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's schema model, a classical psychological model explaining memory and the cognitive development of human beings. Our Dual-Schemata model is developed as a computational model of Piaget's schema model, focusing especially on the sensori-motor developmental period. This developmental process is characterized by a pair of mutually interacting dynamics: one formed by assimilation and accommodation, the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema's differentiation process corresponds to a symbol formation process occurring within an autonomous agent when it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.

  9. THE AQUATOX MODEL

    EPA Science Inventory

    This lecture will present AQUATOX, an aquatic ecosystem simulation model developed by Dr. Dick Park and supported by the U.S. EPA. The AQUATOX model predicts the fate of various pollutants, such as nutrients and organic chemicals, and their effects on the ecosystem, including fi...

  10. Modeling Antibody Diversity.

    ERIC Educational Resources Information Center

    Baker, William P.; Moore, Cathy Ronstadt

    1998-01-01

    Understanding antibody structure and function is difficult for many students. The rearrangement of constant and variable regions during antibody differentiation can be effectively simulated using a paper model. Describes a hands-on laboratory exercise which allows students to model antibody diversity using readily available resources. (PVD)

  11. String Model Building

    SciTech Connect

    Raby, Stuart

    2010-02-10

    In this talk I review some recent progress in heterotic and F theory model building. I then consider work in progress attempting to find the F theory dual to a class of heterotic orbifold models which come quite close to the MSSM.

  12. Earth and ocean modeling

    NASA Technical Reports Server (NTRS)

    Knezovich, F. M.

    1976-01-01

    A modular structured system of computer programs is presented utilizing earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in the frequency, accuracy, and scope of observational data. The EOM in its present state is a first-order approach to a geophysical model of the earth's dynamics.

  13. ATMOSPHERIC MODEL DEVELOPMENT

    EPA Science Inventory

    This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...

  14. Bitzer's Model Reconstructed.

    ERIC Educational Resources Information Center

    Lybarger, Scott; Smith, Craig R.

    1996-01-01

    Reconstructs Lloyd Bitzer's situational model to serve as a guide for the generation of multiperspectival critical assessments of rhetorical discourse. Uses two of President Bush's speeches on the drug crisis to illustrate how the reconstructed model can account for such modern problems as multiple audiences, perceptions, and exigencies. (PA)

  15. Radiation risk estimation models

    SciTech Connect

    Hoel, D.G.

    1987-11-01

    Cancer risk models and their relationship to ionizing radiation are discussed. There are many model assumptions and risk factors that have a large quantitative impact on the cancer risk estimates. Other health end points such as mental retardation may be an even more serious risk than cancer for those with in utero exposures. 8 references.

  16. Model for Coastal Restoration

    SciTech Connect

    Thom, Ronald M.; Judd, Chaeli

    2007-07-27

    Successful restoration of wetland habitats depends on both our understanding of the system and our ability to characterize it. By developing a conceptual model, looking at different spatial scales, and integrating diverse data streams (GIS datasets and NASA products), we were able to develop a dynamic model for site prioritization based on both qualitative and quantitative relationships found in the coastal environment.

  17. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R. ); Dennis, R.L. ); Seilkop, S.K. ); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. ); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
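    The difference statistics and correlations used to quantify model performance, as described above, can be sketched as follows. The function name and the predicted/observed values are illustrative assumptions, not EMEFS data.

```python
import math

# Difference statistics (bias, RMSE) and Pearson correlation between
# model predictions and observations - the standard quantities used to
# quantify model performance alongside scatter plots and time series.

def performance(pred, obs):
    n = len(obs)
    bias = sum(p - o for p, o in zip(pred, obs)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    corr = cov / (sp * so)
    return bias, rmse, corr

obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.5, 2.5, 3.5, 4.5]  # uniformly overpredicts by 0.5
print(performance(pred, obs))  # bias 0.5, RMSE 0.5, correlation 1.0
```

    A uniform overprediction shows why both kinds of measures are reported: the bias and RMSE flag the offset, while the correlation alone (here exactly 1) would suggest a perfect model.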

  18. Multilevel Mixture Factor Models

    ERIC Educational Resources Information Center

    Varriale, Roberta; Vermunt, Jeroen K.

    2012-01-01

    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  19. Tacit Models and Infinity.

    ERIC Educational Resources Information Center

    Fischbein, Efraim

    2001-01-01

    Analyses several examples of tacit influences exerted by mental models on the interpretation of various mathematical concepts in the domain of actual infinity. Specifically addresses the unconscious effect of the figural-pictorial models of statements related to the infinite sets of geometrical points related to the concepts of function and…

  20. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  1. Modelling extended chromospheres

    NASA Technical Reports Server (NTRS)

    Linsky, J. L.

    1986-01-01

    Attention is given to the concept that the warm, partially ionized plasma (presently called chromosphere) associated with such stars as Alpha Boo and Rho Per extends outwards at least several photospheric radii. Calculations are presented for the Mg II K line in light of two input model atmospheres. Specific predictions are deduced from the results obtained by each of the two models.

  2. Enrollment Projection Model.

    ERIC Educational Resources Information Center

    Gustafson, B. Kerry; Hample, Stephen R.

    General documentation for the Enrollment Projection Model used by the Maryland Council for Higher Education (MCHE) is provided. The manual is directed toward both potential users of the model and others interested in enrollment projections. The first four chapters offer administrators or planners insight into the derivation of the…

  3. [Predictive models for ART].

    PubMed

    Arvis, P; Guivarc'h-Levêque, A; Varlan, E; Colella, C; Lehert, P

    2013-02-01

A predictive model is a mathematical expression estimating the probability of pregnancy by combining predictive variables, or indicators. Its development requires three successive phases: formulation of the model, its validation (internal, then external), and the impact study. Its performance is assessed by its discrimination and its calibration. Numerous models have been proposed for spontaneous pregnancies, IUI and IVF, but with rather poor results; their external validation was seldom carried out and was mainly inconclusive. The impact study, which consists in ascertaining whether use of the model improves medical practice, has only exceptionally been done. The ideal ART predictive model is a "Center-specific" model that helps physicians choose between abstention, IUI and IVF by providing a reliable cumulative pregnancy rate for each option. Such a tool would make it possible to rationalize practice by avoiding premature, late, or hopeless treatments. It would also make it possible to compare performance between ART Centers on objective criteria. Today the best solution is to adjust the existing models to one's own practice, by selecting models validated with variables describing the treated population and adjusting the calculation to the Center's performance. PMID:23182786
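
    The discrimination and calibration measures the abstract refers to are easy to make concrete. A minimal sketch (data and probabilities invented for illustration, not taken from the article): the c-statistic is the probability that a randomly chosen pregnancy case received a higher predicted probability than a randomly chosen non-pregnancy case, and calibration-in-the-large compares the mean predicted probability with the observed event rate.

    ```python
    def c_statistic(probs, outcomes):
        """Probability that a randomly chosen positive case receives a higher
        predicted probability than a randomly chosen negative case (ties = 0.5)."""
        pos = [p for p, y in zip(probs, outcomes) if y == 1]
        neg = [p for p, y in zip(probs, outcomes) if y == 0]
        pairs = [(p, n) for p in pos for n in neg]
        score = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
        return score / len(pairs)

    def calibration_in_the_large(probs, outcomes):
        """Compare mean predicted probability with the observed event rate."""
        return sum(probs) / len(probs), sum(outcomes) / len(outcomes)

    # hypothetical predicted pregnancy probabilities and observed outcomes
    probs    = [0.1, 0.3, 0.35, 0.6, 0.8]
    outcomes = [0,   0,   1,    0,   1]

    print(c_statistic(probs, outcomes))             # discrimination
    print(calibration_in_the_large(probs, outcomes))  # (mean predicted, observed rate)
    ```

    A well-calibrated, discriminating model would show a c-statistic well above 0.5 and predicted/observed rates that nearly agree.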

  4. Applied model validation

    NASA Astrophysics Data System (ADS)

    Davies, A. D.

    1985-07-01

The NBS Center for Fire Research (CFR) conducts scientific research bearing on the fire safety of buildings, vehicles, tunnels and other inhabited structures. Data from controlled fire experiments are collected, analyzed and reduced to the analytical formulas that appear to underlie the observed phenomena. These results and more general physical principles are then combined into models to predict the development of environments that may be hostile to humans. This is a progress report of an applied model validation case study. The subject model is Transport of Fire, Smoke and Gases (FAST). Products from a fire in a burn room exit through a connected corridor to outdoors. Cooler counterflow air in a lower layer feeds the fire. The model predicts corridor layer temperatures and thicknesses vs. time, given enclosure, fire and ambient specifications. Data have been collected from 38 tests using several fire sizes, but have not been reduced. Corresponding model results, and model and test documentation, are yet to come. Considerable modeling and calculation are needed to convert instrument readings to test results comparable with model outputs so that residual differences may be determined.

  5. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  6. Animal models of tinnitus.

    PubMed

    Brozoski, Thomas J; Bauer, Carol A

    2016-08-01

Presented is a thematic review of animal tinnitus models from a functional perspective. Chronic tinnitus is a persistent subjective sound sensation, emergent typically after hearing loss. Although the sensation is experientially simple, it appears to have a central nervous system substrate of unexpected complexity that includes areas outside of those classically defined as auditory. Over the past 27 years animal models have significantly contributed to understanding tinnitus' complex neurophysiology. In that time, a diversity of models have been developed, each with its own strengths and limitations. None has clearly become a standard. Animal models trace their origin to the 1988 experiments of Jastreboff and colleagues. All subsequent models derive some of their features from those experiments. Common features include behavior-dependent psychophysical determination, acoustic conditions that contrast objective sound and silence, and inclusion of at least one normal-hearing control group. In the present review, animal models have been categorized as either interrogative or reflexive. Interrogative models use emitted behavior under voluntary control to indicate hearing. An example would be pressing a lever to obtain food in the presence of a particular sound. In this type of model animals are interrogated about their auditory sensations, analogous to asking a patient, "What do you hear?" These models require at least some training and motivation management, and reflect the perception of tinnitus. Reflexive models, in contrast, employ acoustic modulation of an auditory reflex, such as the acoustic startle response. An unexpected loud sound will elicit a reflexive motor response from many species, including humans. Although involuntary, acoustic startle can be modified by a lower-level preceding event, including a silent sound gap. Sound-gap modulation of acoustic startle appears to discriminate tinnitus in animals as well as humans, and requires no training or…

  7. Simplified Models for Dark Matter Model Building

    NASA Astrophysics Data System (ADS)

    DiFranzo, Anthony Paul

The largest mass component of the universe is a longstanding mystery to the physics community. As a glaring source of new physics beyond the Standard Model, there is a large effort to uncover the quantum nature of dark matter. Many probes have been developed to search for this elusive matter, cultivating a rich environment for a phenomenologist. In addition to the primary probes---colliders, direct detection, and indirect detection---each with their own complexities, there is a plethora of prospects to illuminate our unanswered questions. In this work, phenomenological techniques for studying dark matter and other possible hints of new physics will be discussed. This work primarily focuses on the use of Simplified Models, which are intended to be a compromise between generality and validity of the theoretical description. They are often used to parameterize a particular search, develop a well-defined sense of complementarity between searches, or motivate new search strategies. Explicit examples of such models and how they may be used will be the highlight of each chapter.

  8. Turbulence Modeling: A NASA Perspective

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    2001-01-01

    This paper presents turbulence modeling from NASA's perspective. The topics include: 1) Hierarchy of Solution Methods; 2) Turbulence Modeling Focus; 3) Linear Eddy Viscosity Models; and 4) Nonlinear Eddy Viscosity Algebraic Stress Models.

  9. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  10. Australia's Next Top Fraction Model

    ERIC Educational Resources Information Center

    Gould, Peter

    2013-01-01

    Peter Gould suggests Australia's next top fraction model should be a linear model rather than an area model. He provides a convincing argument and gives examples of ways to introduce a linear model in primary classrooms.

  11. Staged Models for Interdisciplinary Research.

    PubMed

    Lafuerza, Luis F; Dyson, Louise; Edmonds, Bruce; McKane, Alan J

    2016-01-01

Modellers of complex biological or social systems are often faced with an invidious choice: to use simple models with few mechanisms that can be fully analysed, or to construct complicated models that include all the features which are thought relevant. The former ensures rigour, the latter relevance. We discuss a method that combines these two approaches, beginning with a complex model and then modelling the complicated model with simpler models. The resulting "chain" of models ensures some rigour and relevance. We illustrate this process on a complex model of voting intentions, constructing a reduced model which agrees well with the predictions of the full model. Experiments with variations of the simpler model yield additional insights which are hidden by the complexity of the full model. This approach facilitated collaboration between social scientists and physicists: the complex model was specified based on the social science literature, and the simpler model was constrained to agree (in core aspects) with the complicated model. PMID:27362836

  12. Modeling glacial climates

    NASA Technical Reports Server (NTRS)

    North, G. R.; Crowley, T. J.

    1984-01-01

    Mathematical climate modelling has matured as a discipline to the point that it is useful in paleoclimatology. As an example a new two dimensional energy balance model is described and applied to several problems of current interest. The model includes the seasonal cycle and the detailed land-sea geographical distribution. By examining the changes in the seasonal cycle when external perturbations are forced upon the climate system it is possible to construct hypotheses about the origin of midlatitude ice sheets and polar ice caps. In particular the model predicts a rather sudden potential for glaciation over large areas when the Earth's orbital elements are only slightly altered. Similarly, the drift of continents or the change of atmospheric carbon dioxide over geological time induces radical changes in continental ice cover. With the advance of computer technology and improved understanding of the individual components of the climate system, these ideas will be tested in far more realistic models in the near future.
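
    The sudden glaciation potential described above can be illustrated with a far simpler toy than the paper's two-dimensional seasonal model: a zero-dimensional energy-balance model with ice-albedo feedback, in which a modest reduction in absorbed solar input flips the equilibrium to a deeply glaciated state. The parameter values below are standard textbook-style choices, not taken from the paper.

    ```python
    def albedo(T):
        # crude planetary albedo: ice-covered below -10 C, ice-free above 0 C,
        # linear ramp in between (ice-albedo feedback)
        if T <= -10.0:
            return 0.62
        if T >= 0.0:
            return 0.30
        return 0.30 + (0.62 - 0.30) * (-T / 10.0)

    def equilibrium_T(Q, A=203.3, B=2.09, T0=15.0, iters=200):
        """Fixed-point iteration on the balance Q*(1 - albedo(T)) = A + B*T,
        with outgoing longwave radiation linearized as A + B*T (W/m^2, T in C)."""
        T = T0
        for _ in range(iters):
            T = (Q * (1.0 - albedo(T)) - A) / B
        return T

    print(equilibrium_T(342.0))  # near present-day solar input: warm, ice-free
    print(equilibrium_T(280.0))  # reduced forcing: jumps to a glaciated branch
    ```

    The discontinuous jump between the two equilibria, rather than a smooth cooling, is the toy-model analogue of the "rather sudden potential for glaciation" the abstract describes.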

  13. Modeling ocean circulation

    SciTech Connect

    Semtner, A.J.

    1995-09-08

Ocean numerical models have become quite realistic over the past several years as a result of improved methods, faster computers, and global data sets. Models now treat basin-scale to global domains while retaining the fine spatial scales that are important for modeling the transport of heat, salt, and other properties over vast distances. Simulations are reproducing observed satellite results on the energetics of strong currents and are properly showing diverse aspects of thermodynamic and dynamic ocean responses ranging from deep-water production to El Niño. Now models can represent not only currents but also the consequences for climate, biology, and geochemistry over time spans from months to decades. However, much remains to be understood from models about ocean circulation on longer time scales, including the evolution of the dominant water masses, the predictability of climate, and the ocean's influence on global change. 34 refs., 6 figs.

  14. Strength Modeling Report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.; Lee, P.; Wong, S.

    1985-01-01

Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling, an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.

  15. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.

    2005-01-01

Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well-known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
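
    The basic construction of a permutation test for a linear model can be sketched in a few lines: permute the response to generate the null distribution of the slope estimate and compare the observed slope against it. The data here are invented for illustration; real applications would use the robust estimators the entry discusses.

    ```python
    import random

    def slope(x, y):
        """Ordinary least-squares slope for simple linear regression."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        return sxy / sxx

    def permutation_pvalue(x, y, n_perm=2000, seed=42):
        """Two-sided p-value: permute y to build the null distribution of the slope."""
        rng = random.Random(seed)
        observed = abs(slope(x, y))
        y_perm = list(y)
        extreme = 0
        for _ in range(n_perm):
            rng.shuffle(y_perm)
            if abs(slope(x, y_perm)) >= observed:
                extreme += 1
        return (extreme + 1) / (n_perm + 1)   # add-one correction for validity

    x = [1, 2, 3, 4, 5, 6, 7, 8]
    y = [2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 8.2, 8.8]   # strong linear trend
    print(permutation_pvalue(x, y))   # small p-value: trend unlikely under the null
    ```

    No normality assumption on the errors is needed; only exchangeability of the responses under the null hypothesis.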

  16. Modeling earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
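
    The alternating simulation scheme the abstract describes can be sketched as follows. The functional forms linking the Pareto tail index to the preceding waiting time, and the Gamma scale to the preceding magnitude, are invented placeholders here, not the fitted values from the paper; magnitudes are truncated at Mw 9 for numerical sanity.

    ```python
    import random

    def sample_pareto(alpha, m_min, rng):
        # inverse-CDF draw: M = m_min * U**(-1/alpha), with U in (0, 1]
        u = 1.0 - rng.random()
        return m_min * u ** (-1.0 / alpha)

    def simulate(n_events, seed=1):
        """Alternate magnitude | waiting-time and waiting-time | magnitude draws."""
        rng = random.Random(seed)
        t, w = 0.0, 30.0                 # start after a hypothetical 30-day quiet spell
        events = []
        for _ in range(n_events):
            alpha = 1.5 + 0.01 * w                        # tail index grows with quiet time
            m = min(sample_pareto(alpha, 4.0, rng), 9.0)  # truncated Pareto magnitude
            shape, scale = 1.2, 10.0 * 2.0 ** (m - 4.0)   # bigger shocks -> longer mean wait
            w = rng.gammavariate(shape, scale)            # Gamma waiting time (days)
            t += w
            events.append((t, m))
        return events

    events = simulate(200)
    print(len(events), min(m for _, m in events), max(m for _, m in events))
    ```

    Repeating such simulations many times gives the Monte Carlo estimate of the probability of several earthquakes within a year or a decade that the paper targets.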

  17. Cardiovascular modeling and diagnostics

    SciTech Connect

    Kangas, L.J.; Keller, P.E.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper, a novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
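
    The diagnosis-by-residual idea can be sketched with a deliberately simple stand-in for the trained recurrent network: any personal one-step predictor (here an exponential smoother) yields prediction residuals, and sustained large residuals flag a departure from the individual's modelled behaviour. The data and threshold below are invented for illustration.

    ```python
    def one_step_predictions(series, alpha=0.5):
        """Exponential smoothing used as a one-step-ahead predictor
        (stand-in for a trained recurrent-network model of the individual)."""
        pred, preds = series[0], []
        for x in series:
            preds.append(pred)                  # predict before seeing x
            pred = alpha * x + (1 - alpha) * pred
        return preds

    def flag_anomalies(series, threshold=8.0):
        """Indices where the prediction residual exceeds the threshold."""
        preds = one_step_predictions(series)
        return [i for i, (x, p) in enumerate(zip(series, preds))
                if abs(x - p) > threshold]

    hr = [70, 71, 69, 70, 72, 71, 70, 95, 97, 96, 71, 70]   # synthetic heart rate
    print(flag_anomalies(hr))   # samples where behaviour departs from the model
    ```

    In the approach the abstract describes, the predictor would be fitted to each individual's physiological measurements, so the residuals measure deviation from that person's own baseline rather than from a population norm.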

  18. A Preliminary Jupiter Model

    NASA Astrophysics Data System (ADS)

    Hubbard, W. B.; Militzer, B.

    2016-03-01

    In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen-helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen-helium-rich envelope with approximately three times solar metallicity.

  19. Fuzzy object modeling

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Falcao, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Vaideeswaran, Pavithra; Mishra, Shipra; Grevera, George J.; Saboury, Babak; Torigian, Drew A.

    2011-03-01

To make Quantitative Radiology (QR) a reality in routine clinical practice, computerized automatic anatomy recognition (AAR) becomes essential. As part of this larger goal, we present in this paper a novel fuzzy strategy for building bodywide group-wise anatomic models. They have the potential to handle uncertainties and variability in anatomy naturally and to be integrated with the fuzzy connectedness framework for image segmentation. Our approach is to build a family of models, called the Virtual Quantitative Human, representing normal adult subjects at a chosen resolution of the population variables (gender, age). Models are represented hierarchically, the descendants representing organs contained in parent organs. Based on an index of fuzziness of the models, 32 thorax data sets, and 10 organs defined in them, we found that the hierarchical approach to modeling can effectively handle the non-linear relationships in position, scale, and orientation that exist among organs in different patients.

  20. Integrated Environmental Control Model

    1999-09-03

IECM is a powerful multimedia engineering software program for simulating an integrated coal-fired power plant. It provides a capability to model various conventional and advanced processes for controlling air pollutant emissions from coal-fired power plants before, during, or after combustion. The principal purpose of the model is to calculate the performance, emissions, and cost of power plant configurations employing alternative environmental control methods. The model consists of various control technology modules, which may be integrated into a complete utility plant in any desired combination. In contrast to conventional deterministic models, the IECM offers the unique capability to assign probabilistic values to all model input parameters, and to obtain probabilistic outputs in the form of cumulative distribution functions indicating the likelihood of different costs and performance results. A Graphical User Interface (GUI) facilitates the configuration of the technologies, entry of data, and retrieval of results.
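
    The probabilistic-input/CDF-output capability described above can be illustrated by Monte Carlo propagation through a toy cost function. The function and distributions below are invented stand-ins, not IECM's actual modules or parameter values.

    ```python
    import random

    def annual_cost(coal_price, removal_eff):
        # hypothetical stand-in for a plant-configuration cost model ($M/yr):
        # fuel cost grows with price; control cost grows sharply with efficiency
        return 40.0 + 120.0 * coal_price + 25.0 / (1.001 - removal_eff)

    def empirical_cdf(samples):
        """Sorted (value, cumulative probability) pairs."""
        xs = sorted(samples)
        n = len(xs)
        return [(x, (i + 1) / n) for i, x in enumerate(xs)]

    rng = random.Random(7)
    costs = [annual_cost(rng.uniform(1.5, 2.5),             # $/GJ coal price
                         rng.triangular(0.90, 0.99, 0.95))  # SO2 removal efficiency
             for _ in range(5000)]
    cdf = empirical_cdf(costs)
    median_cost = cdf[len(cdf) // 2][0]
    print(round(median_cost, 1))   # a single point on the output CDF
    ```

    Instead of one deterministic cost number, the result is a full distribution: any percentile can be read off the CDF, which is the form of output the IECM description emphasizes.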