Science.gov

Sample records for aerosol-climate model pnnl-mmf

  1. Introducing the aerosol-climate model MAECHAM5-SAM2

    NASA Astrophysics Data System (ADS)

    Hommel, R.; Timmreck, C.; Graf, H. F.

    2009-04-01

    We present the new global aerosol model MAECHAM5-SAM2, developed to study aerosol dynamics in the UTLS under background and volcanic conditions. The microphysical core module SAM2 treats the formation, evolution and transport of stratospheric sulphuric acid aerosol. The aerosol size distribution and the weight percentage of the sulphuric acid solution are calculated depending on the concentrations of H2SO4 and H2O, their vapor pressures, and the atmospheric temperature and pressure. A fixed sectional method resolves the aerosol distribution between 1 nm and 2.6 micron in particle radius. Homogeneous nucleation, condensation and evaporation, coagulation, water-vapor growth, sedimentation and sulphur chemistry are included. The module is applied in the middle-atmosphere MAECHAM5 model, which resolves the atmosphere up to 0.01 hPa (~80 km) in 39 layers. We show that MAECHAM5-SAM2 represents well the in-situ measured size distributions of stratospheric background aerosol in northern hemisphere mid-latitudes. Distinct differences appear when derived integrated aerosol parameters (surface area, effective radius) are compared with aerosol climatologies based on the SAGE II satellite instrument (derived by the University of Oxford and the NASA Ames laboratory). The bias between the model and the SAGE II data increases as the moment of the aerosol size distribution decreases. Thus the modeled effective radius shows the strongest bias, followed by the aerosol surface area density; the higher moments, volume density and mass density of the global stratospheric aerosol coverage, are correspondingly less biased. This finding supports key finding No. 2 of the SPARC Assessment of Stratospheric Aerosol Properties (2006), which showed that during periods of very low aerosol load in the stratosphere, the consistency between in-situ and satellite measurements that exists in a volcanically perturbed stratosphere breaks down and significant
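    The record above compares integrated moments of the aerosol size distribution (surface area density, effective radius, volume and mass density). As a minimal sketch of how such moments follow from a sectional distribution like SAM2's, the code below integrates an entirely made-up sectional number distribution; the grid limits match the abstract (1 nm to 2.6 μm in radius), but the number concentrations and the assumed solution density are illustrative only.

```python
import numpy as np

# Illustrative sectional grid: 44 sections from 1 nm to 2.6 um in radius;
# the number concentrations below are invented for the sketch.
r = np.logspace(np.log10(1e-3), np.log10(2.6), 44)   # section radius [um]
n = np.exp(-0.5 * (np.log(r / 0.08) / 0.8) ** 2)     # number conc. per section [cm^-3]

# Integrated moments of the size distribution
surface = (4.0 * np.pi * r**2 * n).sum()             # surface area density [um^2 cm^-3]
volume = (4.0 / 3.0 * np.pi * r**3 * n).sum()        # volume density [um^3 cm^-3]
mass = 1.7 * volume            # mass density for an assumed solution density of 1.7 g cm^-3

# Effective radius: ratio of the 3rd to the 2nd moment of the distribution
r_eff = 3.0 * volume / surface
```

    Because the effective radius is a ratio of the third to the second moment, it is more sensitive to the small-particle end of the distribution than the volume or mass density, consistent with the bias ordering the abstract describes.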

  2. Land cover maps, BVOC emissions, and SOA burden in a global aerosol-climate model

    NASA Astrophysics Data System (ADS)

    Stanelle, Tanja; Henrot, Alexandra; Bey, Isabelle

    2015-04-01

    It has been reported that different land cover representations influence the emission of biogenic volatile organic compounds (BVOCs) (e.g. Guenther et al., 2006), yet the land cover forcing used in model simulations is quite uncertain (e.g. Jung et al., 2006). As a consequence, the simulated emission of BVOCs depends on the applied land cover map. To test the sensitivity of global and regional BVOC emission estimates to the applied land cover map, we implemented three different land cover maps in our global aerosol-climate model ECHAM6-HAM2.2. We found a high sensitivity for tropical regions. BVOCs are very prominent precursors for the production of secondary organic aerosols (SOA); the sensitivity of BVOC emissions to the land cover map therefore affects the SOA burden in the atmosphere. With our model system we are able to quantify that impact. References: Guenther et al. (2006), Estimates of global terrestrial isoprene emissions using MEGAN, Atmos. Chem. Phys., 6, 3181-3210, doi:10.5194/acp-6-3181-2006. Jung et al. (2006), Exploiting synergies of global land cover products for carbon cycle modeling, Rem. Sens. Environ., 101, 534-553, doi:10.1016/j.rse.2006.01.020.

  3. Aerosol Indirect Effects on Cirrus Clouds in Global Aerosol-Climate Models

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, K.; Wang, Y.; Neubauer, D.; Lohmann, U.; Ferrachat, S.; Zhou, C.; Penner, J.; Barahona, D.; Shi, X.

    2015-12-01

    Cirrus clouds play an important role in regulating the Earth's radiative budget and the water vapor distribution in the upper troposphere. Aerosols can act as solution droplets or ice nuclei that promote ice nucleation in cirrus clouds. Anthropogenic emissions from fossil fuel and biomass burning activities have substantially perturbed and enhanced aerosol particle concentrations in the atmosphere. Global aerosol-climate models (GCMs) have now been used to quantify the radiative forcing and effects of aerosols on cirrus clouds (IPCC AR5). However, the uncertainty of these estimates is very large because of the differing representations of ice cloud formation and evolution processes in GCMs. In addition, large discrepancies have been found between model simulations in the spatial distributions of ice-nucleating aerosols, relative humidity, and temperature fluctuations, which contribute to different estimates of the aerosol indirect effect through cirrus clouds. In this presentation, four GCMs with state-of-the-art representations of cloud microphysics and aerosol-cloud interactions are used to estimate the aerosol indirect effects on cirrus clouds and to identify the causes of the discrepancies. The estimated global and annual mean anthropogenic aerosol indirect effect through cirrus clouds ranges from 0.1 W m-2 to 0.3 W m-2 in terms of the top-of-the-atmosphere (TOA) net radiation flux, and from 0.5 to 0.6 W m-2 for the TOA longwave flux. Despite the good agreement on the global mean, large discrepancies are found at the regional scale, and the physics behind the aerosol indirect effect differs dramatically among the models. Our analysis suggests that the burden of ice-nucleating aerosols in the upper troposphere, the ice nucleation frequency, and the relative roles of ice formation processes (i.e., homogeneous versus heterogeneous nucleation) play key roles in determining the characteristics of the simulated aerosol indirect effects. In addition to the indirect effect estimate, we also use field campaign

  4. New developments in the representation of Saharan dust sources in the aerosol-climate model ECHAM6-HAM2

    NASA Astrophysics Data System (ADS)

    Heinold, B.; Tegen, I.; Schepanski, K.; Banks, J. R.

    2015-09-01

    In the aerosol-climate model ECHAM6-HAM2, dust source activation (DSA) observations from the Meteosat Second Generation (MSG) satellite are proposed to replace the original source area parameterization over the Sahara Desert. The new setup is tested in nudged simulations for the period 2007 to 2008. The evaluation is based on comparisons to dust emission events inferred from MSG dust index imagery, AERONET sun photometer observations, and satellite retrievals of aerosol optical thickness (AOT). The model results agree well with AERONET measurements. Good correlations between model results and MSG-SEVIRI dust AOT as well as Multi-angle Imaging SpectroRadiometer (MISR) AOT indicate that the spatial dust distribution is also well reproduced. Using the MSG-based source map, ECHAM6-HAM2 computes a more realistic geographical distribution and up to 20 % higher annual Saharan dust emissions. The representation of dust AOT is partly improved in the southern Sahara and Sahel. In addition, the spatial variability is increased towards better agreement with observations, depending on the season. Thus, using the MSG DSA map can help to circumvent the issue of uncertain soil input parameters. An important remaining issue is the need to improve the model representation of moist convection and stable nighttime conditions: compared to sub-daily DSA information from MSG-SEVIRI and results from a regional model, ECHAM6-HAM2 notably underestimates the important fraction of morning dust events caused by the breakdown of the nocturnal low-level jet, while a major contribution comes from afternoon-to-evening emissions.

  5. New developments in the representation of Saharan dust sources in the aerosol-climate model ECHAM6-HAM2

    NASA Astrophysics Data System (ADS)

    Heinold, Bernd; Tegen, Ina; Schepanski, Kerstin; Banks, Jamie R.

    2016-02-01

    In the aerosol-climate model ECHAM6-HAM2, dust source activation (DSA) observations from the Meteosat Second Generation (MSG) satellite are proposed to replace the original source area parameterization over the Sahara Desert. The new setup is tested in nudged simulations for the period 2007 to 2008. The evaluation is based on comparisons to dust emission events inferred from MSG dust index imagery, Aerosol Robotic Network (AERONET) sun photometer observations, and satellite retrievals of aerosol optical thickness (AOT). The model results agree well with AERONET measurements, especially in terms of seasonal variability, and a good spatial correlation was found between model results and MSG-SEVIRI (Spinning-Enhanced Visible and InfraRed Imager) dust AOT as well as Multi-angle Imaging SpectroRadiometer (MISR) AOT. Using the MSG-based source map, ECHAM6-HAM2 computes a more realistic geographical distribution and up to 20 % higher annual Saharan dust emissions. The representation of dust AOT is partly improved in the southern Sahara and Sahel. In addition, the spatial variability is increased towards better agreement with observations, depending on the season. Thus, using the MSG DSA map can help to circumvent the issue of uncertain soil input parameters. An important remaining issue is the need to improve the model representation of moist convection and stable nighttime conditions: compared to sub-daily DSA information from MSG-SEVIRI and results from a regional model, ECHAM6-HAM2 notably underestimates the important fraction of morning dust events caused by the breakdown of the nocturnal low-level jet, while a major contribution comes from afternoon-to-evening emissions.

  6. Evaluation of the sectional aerosol microphysics module SALSA implementation in ECHAM5-HAM aerosol-climate model

    NASA Astrophysics Data System (ADS)

    Bergman, T.; Kerminen, V.-M.; Korhonen, H.; Lehtinen, K. J.; Makkonen, R.; Arola, A.; Mielonen, T.; Romakkaniemi, S.; Kulmala, M.; Kokkola, H.

    2011-12-01

    We present the implementation and evaluation of the sectional aerosol microphysics module SALSA within the aerosol-climate model ECHAM5-HAM. This aerosol microphysics module has been designed to be flexible and computationally efficient so that it can be implemented in regional- or global-scale models. The computational efficiency has been achieved by keeping the number of variables needed to describe the size and composition distribution to a minimum. The aerosol size distribution is described using 10 size sections in size space, covering diameters from 3 nm to 10 μm and divided into three subranges, each with a distinct, optimised selection of processes and compounds; with parallel sections for different compositions, the module tracks 20 size sections in total. The ability of the module to describe the global aerosol properties was evaluated by comparison against (1) measured continental and marine size distributions, (2) the observed variability of continental modal number concentrations, (3) measured sulphate, organic carbon, black carbon and sea salt mass concentrations, (4) observations of AOD and other aerosol optical properties from satellites and the AERONET network, (5) global aerosol budgets and concentrations from previous model studies, and (6) model results using M7, the default aerosol microphysics module in ECHAM5-HAM. The evaluation shows that the global aerosol properties can be reproduced reasonably well using the coarse resolution of 10 size sections in size space. The simulated global aerosol budgets are within the range of previous studies. Annual-mean surface concentrations of sulphate and carbonaceous species are within a factor of five of the observations, while the simulated sea salt concentrations reproduce the observations less accurately and show high variability. Regionally, AOD is in relatively good agreement with the observations (within a factor of two). At mid-latitudes the observed AOD is captured well, while at high latitudes as well as in some polluted and dust regions the modeled AOD is
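    The sectional grid described in the record above (10 size sections spanning diameters of 3 nm to 10 μm, split into three subranges) can be sketched as log-spaced bin edges. The subrange boundaries and the 3/4/3 section split below are assumptions for illustration, not necessarily the published SALSA configuration.

```python
import numpy as np

# Assumed subrange boundaries [m] and a 3/4/3 section split; illustrative only.
subranges = {
    "subrange 1": np.logspace(np.log10(3e-9), np.log10(50e-9), 4),    # 3 sections
    "subrange 2": np.logspace(np.log10(50e-9), np.log10(700e-9), 5),  # 4 sections
    "subrange 3": np.logspace(np.log10(700e-9), np.log10(10e-6), 4),  # 3 sections
}

for name, edges in subranges.items():
    mids = np.sqrt(edges[:-1] * edges[1:])   # geometric mid-point diameters
    print(name, [f"{d:.1e} m" for d in mids])

n_sections = sum(len(e) - 1 for e in subranges.values())   # 10 in total
```

    With parallel sections carrying different chemical compositions, doubling such a grid yields the 20 tracked size sections mentioned in the abstract.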

  7. Technical Note: On the use of nudging for aerosol-climate model intercomparison studies

    DOE PAGES

    Zhang, K.; Wan, H.; Liu, X.; ...

    2014-04-24

    Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variability of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5, owing to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and in the impact of anthropogenic aerosols on longwave cloud forcing. To reduce discrepancies between the nudged and unconstrained simulations while retaining the advantages of nudging, two alternative experimentation methods are evaluated. The first constrains only the horizontal winds. The second nudges both winds and temperature, but replaces the long-term climatology of the reanalysis with that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests that nudging the horizontal winds but not temperature is a good strategy for investigating aerosol indirect effects through ice clouds, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
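    At its core, the nudging discussed in this note is a Newtonian relaxation term added to the model tendencies. A minimal sketch follows, with illustrative field values and relaxation timescale; the actual CAM5 configuration differs.

```python
def nudge(x, x_ref, dt, tau):
    """Relax a model field x towards a reference x_ref over timescale tau.

    Adds the tendency (x_ref - x) / tau for one time step dt, which is the
    standard nudging formulation.
    """
    return x + dt * (x_ref - x) / tau

# Toy example: wind-only nudging leaves temperature free-running.
u, t = 10.0, 250.0             # model wind [m/s] and temperature [K]
u_ref, t_ref = 12.0, 255.0     # reanalysis values (illustrative numbers)
dt, tau = 1800.0, 6 * 3600.0   # 30-min step, assumed 6-h relaxation time

u = nudge(u, u_ref, dt, tau)   # wind is constrained towards reanalysis
# t is deliberately not nudged, so the model's own temperature climate
# (and hence its ice nucleation behavior) is left unperturbed.
```

    Wind-only nudging simply omits the relaxation update for temperature, which is why it constrains the meteorology without imposing the reanalysis temperature climate on the model.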

  8. Technical Note: On the Use of Nudging for Aerosol-Climate Model Intercomparison Studies

    SciTech Connect

    Zhang, Kai; Wan, Hui; Liu, Xiaohong; Ghan, Steven J.; Kooperman, G. J.; Ma, Po-Lun; Rasch, Philip J.; Neubauer, David; Lohmann, U.

    2014-08-26

    Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variability of the large-scale circulation. On the other hand, the artificial forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5, owing to the systematic temperature bias in the standard model and the relatively strong sensitivity of homogeneous ice nucleation to aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and in the impact of anthropogenic aerosols on longwave cloud forcing. To reduce discrepancies between the nudged and unconstrained simulations while retaining the advantages of nudging, two alternative experimentation methods are evaluated. The first constrains only the horizontal winds. The second nudges both winds and temperature, but replaces the long-term climatology of the reanalysis with that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests that nudging the horizontal winds but not temperature is a good strategy, especially for studies that involve both warm and cold clouds.

  9. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    NASA Astrophysics Data System (ADS)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

    Biomass and fossil fuel combustion emits black carbon (BC) and brown carbon (BrC) aerosols that absorb sunlight and warm climate, and organic carbon (OC) aerosols that scatter sunlight and cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities arising from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, as elaborated below. The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and greater spectral dependence of ω (Liu et al., GRL, 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and evaluated against field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism driven by MCEFI (Saleh et al., NGeo, 2014). We show that BrC absorption is concentrated in the extremely low volatility component, which favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support
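    The record above derives a parameterization of ω as a function of MCEFI. The sketch below shows only the functional form such a fit might take; the coefficients are placeholders, not the published Liu et al. (2014) values.

```python
def ssa_from_mce(mce):
    """Hypothetical linear fit of single scattering albedo (omega) to
    fire-integrated modified combustion efficiency (MCEFI).

    The slope and intercept are illustrative placeholders only; the real
    parameterization must be taken from the published laboratory fit.
    """
    a, b = 3.0, -2.2           # assumed intercept/slope for the sketch
    omega = a + b * mce        # higher MCE (more flaming) -> lower omega
    return min(max(omega, 0.0), 1.0)   # omega is physically bounded in [0, 1]
```

    The key qualitative behavior, monotonically decreasing ω with increasing combustion efficiency, is what the laboratory data constrain; the field data (e.g. BBOP) are then used to evaluate whether the fitted curve transfers to ambient fires.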

  10. Indian monsoon and the elevated-heat-pump mechanism in a coupled aerosol-climate model

    NASA Astrophysics Data System (ADS)

    D'Errico, Miriam; Cagnazzo, Chiara; Fogli, Pier Giuseppe; Lau, William K. M.; Hardenberg, Jost; Fierli, Federico; Cherchi, Annalisa

    2015-09-01

    A coupled aerosol-atmosphere-ocean-sea ice climate model is used to explore the interaction between aerosols and Indian summer monsoon precipitation on seasonal-to-interannual time scales. Results show that when increased aerosol loading occurs on the Himalayan slopes in the premonsoon period (April-May), an intensification of early monsoon rainfall over India and an increase in the low-level westerly flow follow, in agreement with the elevated-heat-pump mechanism. The increase in rainfall during the early monsoon season has a cooling effect on the land surface. In the same period, the enhanced surface cooling may also be amplified through solar dimming by increased cloudiness and aerosol loading, via increased dust transported by the low-level westerly flow. The surface cooling causes a subsequent reduction in monsoon rainfall over India in July-August. The time-lagged nature of the model's reasonably realistic response to aerosol forcing suggests that absorbing aerosols, besides their potential key roles in the monsoon water cycle and climate, may influence the seasonal variability of the Indian summer monsoon.

  11. Aerosol-climate interactions in the Norwegian Earth System Model - NorESM1-M

    NASA Astrophysics Data System (ADS)

    Kirkevåg, A.; Iversen, T.; Seland, Ø.; Hoose, C.; Kristjánsson, J. E.; Struthers, H.; Ekman, A. M. L.; Ghan, S.; Griesfeller, J.; Nilsson, E. D.; Schulz, M.

    2013-02-01

    The objective of this study is to document and evaluate recent changes and updates to the module for aerosols and aerosol-cloud-radiation interactions in the atmospheric module CAM4-Oslo of the core version of the Norwegian Earth System Model (NorESM), NorESM1-M. Particular attention is paid to the role of natural organics, sea salt, and mineral dust in determining the gross aerosol properties as well as the anthropogenic contribution to these properties and the associated direct and indirect radiative forcing. The aerosol module is extended from earlier published versions and includes life-cycling of sea salt, mineral dust, particulate sulphate, black carbon, and primary and secondary organics. The impacts of most of the numerous changes since previous versions are thoroughly explored by sensitivity experiments. The most important changes are: modified prognostic sea salt emissions; updated treatment of precipitation scavenging and gravitational settling; inclusion of biogenic primary organics and methane sulphonic acid (MSA) from oceans; almost doubled production of land-based biogenic secondary organic aerosols (SOA); and an increased ratio of organic matter to organic carbon (OM/OC) for biomass burning aerosols, from 1.4 to 2.6. Compared with in situ measurements and remotely sensed data, the new treatments of sea salt and dust aerosols give smaller biases in near-surface mass concentrations and aerosol optical depth than the earlier model version. The model biases for mass concentrations are approximately unchanged for sulphate and BC. The enhanced levels of modeled OM yield improved overall statistics, even though OM is still underestimated in Europe and overestimated in North America. The global anthropogenic aerosol direct radiative forcing (DRF) at the top of the atmosphere has changed from a small positive value to -0.08 W m-2 in CAM4-Oslo. The sensitivity tests suggest that this change can be attributed to the new treatment of biomass

  12. Aerosol-climate interactions in the Norwegian Earth System Model - NorESM

    NASA Astrophysics Data System (ADS)

    Kirkevåg, A.; Iversen, T.; Seland, Ø.; Hoose, C.; Kristjánsson, J. E.; Struthers, H.; Ekman, A. M. L.; Ghan, S.; Griesfeller, J.; Nilsson, E. D.; Schulz, M.

    2012-09-01

    The objective of this study is to document and evaluate recent changes and updates to the module for aerosols and aerosol-cloud-radiation interactions in the atmospheric module CAM4-Oslo of the Norwegian Earth System Model (NorESM). Particular attention is paid to the role of natural organics, sea salt, and mineral dust in determining the gross aerosol properties as well as the anthropogenic contribution to these properties and the associated direct and indirect radiative forcing. The aerosol module is extended from earlier published versions and includes life-cycling of sea salt, mineral dust, particulate sulphate, black carbon, and primary and secondary organics. The impacts of most of the numerous changes since previous versions are thoroughly explored by sensitivity experiments. The most important changes are: modified prognostic sea salt emissions; updated treatment of precipitation scavenging and gravitational settling; inclusion of biogenic primary organics and methane sulphonic acid (MSA) from oceans; almost doubled production of land-based biogenic secondary organic aerosols (SOA); and an increased ratio of organic matter to organic carbon (OM/OC) for biomass burning aerosols, from 1.4 to 2.6. Compared with in-situ measurements and remotely sensed data, the new treatments of sea salt and dust aerosols give smaller biases in near-surface mass concentrations and aerosol optical depth than the earlier model version. The model biases for mass concentrations are approximately unchanged for sulphate and BC. The enhanced levels of modeled OM yield improved overall statistics, even though OM is still underestimated in Europe and overestimated in North America. The global direct radiative forcing (DRF) at the top of the atmosphere has changed from a small positive value to -0.08 W m-2 in CAM4-Oslo. The sensitivity tests suggest that this change can be attributed to the new treatment of biomass burning aerosols and gravitational settling.

  13. The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

    NASA Astrophysics Data System (ADS)

    Zhang, K.; O'Donnell, D.; Kazil, J.; Stier, P.; Kinne, S.; Lohmann, U.; Ferrachat, S.; Croft, B.; Quaas, J.; Wan, H.; Rast, S.; Feichter, J.

    2012-10-01

    This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been introduced into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Nudged simulations of the year 2000 are carried out to compare the aerosol properties and global distribution in HAM1 and HAM2, and to evaluate them against various observations. Sensitivity experiments are performed to help identify the impact of each individual update in the model formulation. Results indicate that from HAM1 to HAM2 there is a marked weakening of aerosol water uptake in the lower troposphere, reducing the total aerosol water burden from 75 Tg to 51 Tg. The main reason is that the newly introduced κ-Köhler-theory-based water uptake scheme uses a lower value for the maximum relative humidity cutoff. Particulate organic matter loading in HAM2 is considerably higher in the upper troposphere, because the explicit treatment of secondary organic aerosols allows highly volatile oxidation products of the precursors to be transported vertically to regions of very low temperature and to form aerosols there. Sulfate, black carbon, particulate organic matter and mineral dust have longer lifetimes in HAM2 than in HAM1 because of weaker in-cloud scavenging, which is in turn related to the lower autoconversion efficiency in the newly introduced two-moment cloud microphysics scheme. Modification of the sea salt emission scheme causes a significant increase (from 1.6 to 7.7) in the ratio between accumulation-mode and coarse-mode emission fluxes of aerosol number concentration.
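    The κ-Köhler-based water uptake mentioned in the record above can be illustrated by the equilibrium growth factor it implies when the Kelvin effect is neglected. The relative-humidity cap of 0.95 below is an assumed value standing in for the scheme's maximum-RH cutoff, not the value used in HAM2.

```python
def kappa_growth_factor(kappa, rh, rh_max=0.95):
    """Equilibrium diameter growth factor from kappa-Koehler theory,
    Kelvin effect neglected: gf = (1 + kappa * aw / (1 - aw))**(1/3).

    rh is capped at rh_max, mirroring the maximum-relative-humidity
    cutoff described in the abstract; 0.95 is an assumed cap.
    """
    aw = min(rh, rh_max)          # water activity ~ capped relative humidity
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# A sulphate-like particle (kappa ~ 0.6) at 90 % relative humidity
gf = kappa_growth_factor(0.6, 0.90)   # diameter grows by roughly this factor
```

    Because water uptake diverges as the (uncapped) humidity approaches saturation, lowering the maximum-RH cutoff directly bounds the equilibrium water on the particles, which is consistent with the reduced total aerosol water burden reported above.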

  14. Evaluation of the sectional aerosol microphysics module SALSA implementation in ECHAM5-HAM aerosol-climate model

    NASA Astrophysics Data System (ADS)

    Bergman, T.; Kerminen, V.-M.; Korhonen, H.; Lehtinen, K. J.; Makkonen, R.; Arola, A.; Mielonen, T.; Romakkaniemi, S.; Kulmala, M.; Kokkola, H.

    2012-06-01

    We present the implementation and evaluation of a sectional aerosol microphysics module SALSA within the aerosol-climate model ECHAM5-HAM. This aerosol microphysics module has been designed to be flexible and computationally efficient so that it can be implemented in regional or global scale models. The computational efficiency has been achieved by minimising the number of variables needed to describe the size and composition distribution. The aerosol size distribution is described using 10 size classes with parallel sections which can have different chemical compositions. Thus in total, the module tracks 20 size sections which cover diameters ranging from 3 nm to 10 μm and are divided into three subranges, each with an optimised selection of processes and compounds. The implementation of SALSA into ECHAM5-HAM includes the main aerosol processes in the atmosphere: emissions, removal, radiative effects, liquid and gas phase sulphate chemistry, and the aerosol microphysics. The aerosol compounds treated in the module are sulphate, organic carbon, sea salt, black carbon, and mineral dust. In its default configuration, ECHAM5-HAM treats aerosol size distribution using the modal method. In this implementation, the aerosol processes were converted to be used in a sectional model framework. The ability of the module to describe the global aerosol properties was evaluated by comparing against (1) measured continental and marine size distributions, (2) observed variability of continental number concentrations, (3) measured sulphate, organic carbon, black carbon and sea-salt mass concentrations, (4) observations of aerosol optical depth (AOD) and other aerosol optical properties from satellites and AERONET network, (5) global aerosol budgets and concentrations from previous model studies, and (6) model results using M7, which is the default aerosol microphysics module in ECHAM5-HAM. The evaluation shows that the global aerosol properties can be reproduced reasonably well

  15. Spatial distributions and seasonal cycles of aerosol climate effects in India seen in a global climate-aerosol model

    NASA Astrophysics Data System (ADS)

    Henriksson, S. V.; Pietikäinen, J.-P.; Hyvärinen, A.-P.; Räisänen, P.; Kupiainen, K.; Tonttila, J.; Hooda, R.; Lihavainen, H.; O'Donnell, D.; Backman, L.; Klimont, Z.; Laaksonen, A.

    2014-09-01

    Climate-aerosol interactions in India are studied by employing the global climate-aerosol model ECHAM5-HAM and the GAINS inventory for anthropogenic aerosol emissions. Model validation is done for black carbon surface concentrations in Mukteshwar and for features of the monsoon circulation. Seasonal cycles and spatial distributions of radiative forcing and of the temperature and rainfall responses are presented for different model setups. While the total aerosol radiative forcing is strongest in summer, the anthropogenic forcing is considerably stronger in winter than in summer. Local seasonal temperature anomalies caused by aerosols are mostly negative, with some exceptions, e.g., parts of northern India in March-May. Rainfall increases due to the elevated-heat-pump (EHP) mechanism and decreases due to solar dimming mechanisms (SDMs); the relative strengths of these effects during different seasons and for different model setups are studied. Aerosol light absorption does increase rainfall in northern India, but effects due to solar dimming and circulation work to cancel the increase. The total aerosol effect on rainfall is negative for northern India in June-August, but during March-May the effect is positive for most model setups. These differences between responses in different seasons may help resolve the ongoing debate on EHP and SDM effects. Because of the complexity of the problem and known or potential sources of error and bias, the results should be interpreted cautiously, as they depend entirely on how realistic the model is. Aerosol-rainfall correlations and anticorrelations are shown not to be a reliable sole argument for deducing causality.

  16. Sensitivity of Remote Aerosol Distributions to Representation of Cloud-Aerosol Interactions in a Global Climate Model

    SciTech Connect

    Wang, Hailong; Easter, Richard C.; Rasch, Philip J.; Wang, Minghuai; Liu, Xiaohong; Ghan, Steven J.; Qian, Yun; Yoon, Jin-Ho; Ma, Po-Lun; Vinoj, V.

    2013-06-05

    Many global aerosol and climate models, including the widely used Community Atmosphere Model version 5 (CAM5), have large biases in predicting aerosols in remote regions such as the upper troposphere and high latitudes. In this study, we conduct CAM5 sensitivity simulations to understand the role of key processes associated with aerosol transformation and wet removal in the vertical and horizontal long-range transport of aerosols to these remote regions. Improvements are made to processes that are currently not well represented in CAM5, guided by surface and aircraft measurements together with results from a multi-scale aerosol-climate model (PNNL-MMF) that explicitly represents convection and aerosol-cloud interactions at cloud-resolving scales. We pay particular attention to black carbon (BC) because of its importance in the Earth system and the availability of measurements. We introduce into CAM5 a new unified scheme for convective transport and aerosol wet removal with explicit aerosol activation above the convective cloud base. This new implementation reduces the excessive BC aloft and better simulates the observed BC profiles, which show decreasing mixing ratios in the mid- to upper troposphere. After implementing this new unified convective scheme, we examine wet removal of submicron aerosols, which occurs primarily through cloud processes. The wet removal depends strongly on the sub-grid-scale liquid cloud fraction and the rate of conversion of liquid water to precipitation. These processes lead to very strong wet removal of BC and other aerosols over mid- to high latitudes during the winter months. With our improvements, the Arctic BC burden shows a 10-fold (5-fold) increase in the winter (summer) months, resulting in a much better simulation of the BC seasonal cycle as well. Arctic sulphate and other aerosol species also increase, but to a lesser extent.
An explicit treatment of BC aging with slower aging assumptions produces an additional 30-fold (5-fold) increase in
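    An explicit BC aging treatment of the kind referenced above is commonly represented as a first-order conversion of freshly emitted, hydrophobic BC to an aged, hydrophilic state with a prescribed e-folding time; slower aging (a longer e-folding time) leaves more BC in the form that resists wet removal. A minimal sketch under that common assumption (the function name and timescales are illustrative, not the values used in this paper):

```python
import math

def bc_aging_step(bc_phobic, bc_philic, dt_s, tau_age_s):
    """Advance hydrophobic -> hydrophilic BC conversion over one timestep.

    First-order (exponential) aging with e-folding time tau_age_s;
    a longer tau means slower aging, so more BC stays in the
    hydrophobic state that largely escapes in-cloud wet removal.
    """
    converted = bc_phobic * (1.0 - math.exp(-dt_s / tau_age_s))
    return bc_phobic - converted, bc_philic + converted

# Illustrative: compare a fast (1-day) vs slow (4-day) aging timescale
# over a 6-hour step, starting from 1.0 units of fresh (hydrophobic) BC.
fast = bc_aging_step(1.0, 0.0, 6 * 3600, 1 * 86400)
slow = bc_aging_step(1.0, 0.0, 6 * 3600, 4 * 86400)
```

With the slower timescale, more mass remains hydrophobic after the step, which is the qualitative mechanism behind the increased Arctic BC burden reported above.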

  17. The Impact of humidity above stratiform clouds on indirect aerosol climate forcing

    SciTech Connect

    Ackerman, A S; Kirkpatrick, M P; Stevens, D E; Toon, O B

    2004-12-20

    Some of the global warming effect of anthropogenic greenhouse gases is offset by increased solar reflection from clouds with smaller droplets that form on the increased numbers of cloud condensation nuclei in polluted air. The global magnitude of the resulting indirect aerosol climate forcing is estimated to be comparable in magnitude (but opposite in sign) to the anthropogenic carbon dioxide forcing, but estimates are highly uncertain because of complexities in characterizing the physical processes that determine global aerosol and cloud populations and their interactions. Beyond reflecting sunlight more effectively, smaller droplets are less efficient at producing precipitation, and decreased precipitation is expected to result in increased cloud water and cloud cover, further increasing the indirect forcing. Yet polluted marine boundary-layer clouds are not generally observed to hold more water. Here we use model simulations of stratocumulus clouds to show that suppression of precipitation from increased droplet concentrations leads to increased cloud water only when sufficient precipitation reaches the surface, a condition favored when the overlying air is moist. Otherwise, aerosol-induced suppression of precipitation enhances entrainment of overlying dry air, thereby reducing cloud water and diminishing the indirect climate forcing.

  18. Aerosol Climate Time Series Evaluation In ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, T.; de Leeuw, G.; Pinnock, S.

    2015-12-01

    Within the ESA Climate Change Initiative (CCI), the Aerosol_cci project (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. By the end of 2015, full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from the dual-view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star-occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine-mode AOD, mineral dust AOD (from the thermal IASI spectrometer), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of the first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWiFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values), which were taken into account for a reprocessing. The datasets contain pixel-level uncertainty estimates, which are also validated. The paper will summarize and discuss the results of the major reprocessing and validation conducted in 2015. The focus will be on the ATSR, GOMOS and IASI datasets. Validation of the pixel-level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. 
Opportunities for time series extension with successor instruments of the Sentinel family will be described and the complementarity of the different satellite aerosol products

  19. Aerosol Climate Time Series in ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2016-04-01

    Within the ESA Climate Change Initiative (CCI), the Aerosol_cci project (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from the dual-view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star-occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine-mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from the ATSR instruments and the POLDER sensor), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of the first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWiFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values), which were taken into account for a reprocessing. The datasets contain pixel-level uncertainty estimates, which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Validation of the pixel-level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. 
Opportunities for time series extension

  20. Beyond MODIS: Developing an aerosol climate data record

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Mattoo, S.; Munchak, L. A.; Patadia, F.; Laszlo, I.; Holz, R.

    2013-12-01

    making' assumptions such as cloud masking and pixel selection, as well as 'retrieval' assumptions such as aerosol type and surface reflectance model. There are also instrument issues such as calibration and geolocation, which, even at the level of 1-2%, can lead to 10% error in retrieved AOD. At this point, however, many of these issues have been solved, or are being quantified, for MODIS and VIIRS. In the past year, we created a generic dark-target aerosol retrieval algorithm, which can be applied to MODIS, VIIRS, or any other sensor with a similar set of wavelength bands. We applied the same radiative transfer codes for creating lookup tables, the same protocols for deriving non-aerosol assumptions, and the same criteria for cloud masking. Although there are still inconsistencies to work out, this generic algorithm is being applied to selected months having VIIRS/MODIS overlap. Comparing with AERONET, and with each other, we quantify the statistical agreement between MODIS and VIIRS, both for the official algorithms run on each sensor and for our generic algorithm run on both.

  1. Constraining cloud lifetime effects of aerosols using A-Train satellite observations

    NASA Astrophysics Data System (ADS)

    Wang, Minghuai; Ghan, Steven; Liu, Xiaohong; L'Ecuyer, Tristan S.; Zhang, Kai; Morrison, Hugh; Ovchinnikov, Mikhail; Easter, Richard; Marchand, Roger; Chand, Duli; Qian, Yun; Penner, Joyce E.

    2012-08-01

    Aerosol indirect effects have remained the largest uncertainty in estimates of the radiative forcing of past and future climate change. Observational constraints on cloud lifetime effects are particularly challenging since it is difficult to separate aerosol effects from meteorological influences. Here we use three global climate models, including a multi-scale aerosol-climate model PNNL-MMF, to show that the dependence of the probability of precipitation on aerosol loading, termed the precipitation frequency susceptibility (Spop), is a good measure of the liquid water path response to aerosol perturbation (λ), as both Spop and λ strongly depend on the magnitude of autoconversion, a model representation of precipitation formation via collisions among cloud droplets. This provides a method to use satellite observations to constrain cloud lifetime effects in global climate models. Spop in marine clouds estimated from CloudSat, MODIS and AMSR-E observations is substantially lower than that from global climate models and suggests a liquid water path increase of less than 5% from doubled cloud condensation nuclei concentrations. This implies a substantially smaller impact on shortwave cloud radiative forcing over ocean due to aerosol indirect effects than simulated by current global climate models (a reduction by one-third for one of the conventional aerosol-climate models). Further work is needed to quantify the uncertainties in satellite-derived estimates of Spop and to examine Spop in high-resolution models.
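    The precipitation frequency susceptibility can be estimated from data as the log-log slope of the probability of precipitation (POP) against aerosol loading. A crude two-bin estimator is sketched below for illustration only; the published analyses bin the satellite data much more finely and stratify by cloud regime:

```python
import math

def precip_frequency_susceptibility(aerosol, precipitating):
    """Estimate Spop = d ln(POP) / d ln(A) from paired samples.

    aerosol:       aerosol loading per scene (e.g., an aerosol index)
    precipitating: 1 if the scene precipitates, else 0
    Splits the samples at the median loading and takes a finite-difference
    slope of ln(POP) vs ln(mean loading) between the two halves.
    """
    pairs = sorted(zip(aerosol, precipitating))
    half = len(pairs) // 2
    lo, hi = pairs[:half], pairs[half:]
    pop_lo = sum(p for _, p in lo) / len(lo)
    pop_hi = sum(p for _, p in hi) / len(hi)
    a_lo = sum(a for a, _ in lo) / len(lo)
    a_hi = sum(a for a, _ in hi) / len(hi)
    return ((math.log(pop_hi) - math.log(pop_lo))
            / (math.log(a_hi) - math.log(a_lo)))
```

A negative Spop (precipitation becomes less frequent as loading rises) is the signature of precipitation suppression that the study ties to the autoconversion strength.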

  2. From MODIS to VIIRS: Steps toward continuing the dark-target aerosol climate data record

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Mattoo, S.; Liu, H.; Munchak, L. A.; Laszlo, I.; Cronk, H.

    2012-12-01

    By this fall-2012 AGU meeting, the Moderate Resolution Imaging Spectroradiometer (MODIS) has been flying on NASA's Terra and Aqua satellites for 13 years and 10.5 years, respectively. During this time, the MODIS Aerosol Science Team has fine-tuned the aerosol retrieval algorithms and data processing protocols, resulting in a highly robust, stable and usable aerosol product. The aerosol optical depth (AOD) product has been validated extensively, and the MODIS-retrieved environmental data record (EDR) is becoming a strong foundation for creating an aerosol climate data record (CDR). With last year's launch of the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi-NPP, the VIIRS-derived aerosol product has been designed to continue that provided by MODIS. VIIRS and MODIS have similar orbital mechanics and provide similar spectral resolution with similar spatial resolution. At the same time, the VIIRS and MODIS aerosol algorithms have similar physical assumptions. In fact, initial validation exercises suggest that, in general, the VIIRS aerosol product is performing well, and that the expected error for the VIIRS-derived AOD is similar to that reported by MODIS. Although VIIRS should be able to derive an aerosol product similar in quality to MODIS, can the VIIRS aerosol record be "stitched" together with the MODIS record? To answer this question, instead of qualifying how similar they are, we need to quantify how their differences can and do impact the resulting aerosol products. There are instrumental differences, such as orbit altitude (805 km versus 705 km), spatial resolution (375 m/750 m versus 250 m/500 m/1000 m), spectral differences, and sampling differences. There are pre-processing differences (cloud masking, gas correction assumptions, pixel selection protocols). There are retrieval algorithm differences, and of course final processing and quality control differences. 
Although we expect that most of differences have little or no impact, some may be

  3. Connecting Organic Aerosol Climate-Relevant Properties to Chemical Mechanisms of Sources and Processing

    SciTech Connect

    Thornton, Joel

    2015-01-26

    The research conducted on this project aimed to improve our understanding of secondary organic aerosol (SOA) formation in the atmosphere, and of how SOA impacts climate through its size, phase state, and optical properties. The goal of this project was to demonstrate that using molecular composition information to mechanistically connect source apportionment and climate-relevant properties can improve the physical basis for simulating SOA formation and properties in climate models. The research involved developing and improving methods for online measurement of the molecular composition of SOA under atmospherically relevant conditions, and applying this technology to controlled simulation chamber experiments and field measurements. The science we have completed with this methodology will impact the simulation of aerosol particles in climate models.

  4. Guidelines for the aerosol climatic effects special study: An element of the NASA climate research program

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Research to help develop a better understanding of the role of aerosols in the Earth's radiative balance is summarized. Natural volcanic injections of aerosols into the stratosphere are considered, with the aim of understanding and modeling any resultant evidence of climate change. The approach involves: (1) measurements from aircraft, balloon and ground-based platforms which complement and enhance the aerosol information derived from satellite data; (2) development of instruments required for some of these measurements; (3) theoretical and laboratory work to aid in interpreting and utilizing space-based and in situ data; and (4) preparation for and execution of concentrated observations of stratospheric aerosols following a future large volcanic eruption.

  5. Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework

    SciTech Connect

    Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; Somerville, Richard C.; Zhang, Kai; Liu, Xiaohong; Li, J-L F.

    2014-11-06

    In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper-tropospheric cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud-scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration and to CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) observed at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures, following the homogeneous nucleation threshold. The MMF model predicts a higher frequency of midlatitude supersaturation in the Southern Hemisphere and the winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that, compared to a conventional GCM, the MMF is a more powerful model for simulating parameters that evolve over short time scales, such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation scheme and to the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model produces good agreement with satellite-retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specified Dcs value.
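    The role of the critical diameter Dcs can be illustrated by partitioning a total ice water content between "cloud ice" (D < Dcs) and "snow" (D >= Dcs) under an assumed exponential size distribution with particle mass proportional to D^3. This is a common single-moment microphysics assumption sketched here for illustration; the MMF's actual microphysics is more elaborate:

```python
import math

def ice_snow_partition(total_iwc, lam, dcs):
    """Split total ice water content into cloud ice (D < dcs) and
    snow (D >= dcs), assuming an exponential size distribution
    n(D) = N0 exp(-lam * D) and mass proportional to D^3.

    The mass fraction below dcs is the regularized lower incomplete
    gamma function P(4, lam*dcs), which has a closed form.
    """
    x = lam * dcs
    # P(4, x) = 1 - e^{-x} (1 + x + x^2/2 + x^3/6)
    frac_below = 1.0 - math.exp(-x) * (1.0 + x + x * x / 2.0 + x ** 3 / 6.0)
    return total_iwc * frac_below, total_iwc * (1.0 - frac_below)
```

Raising dcs moves mass from the snow category into cloud ice while leaving the total unchanged, which matches the reported insensitivity of total ice water to the Dcs value.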

  6. Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework

    DOE PAGES

    Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; ...

    2014-11-06

    In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper-tropospheric cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud-scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration and to CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) observed at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures, following the homogeneous nucleation threshold. The MMF model predicts a higher frequency of midlatitude supersaturation in the Southern Hemisphere and the winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that, compared to a conventional GCM, the MMF is a more powerful model for simulating parameters that evolve over short time scales, such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation scheme and to the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model produces good agreement with satellite-retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specified Dcs value.

  7. Aerosol climate effects and air quality impacts from 1980 to 2030

    SciTech Connect

    Menon, Surabi; Menon, Surabi; Unger, Nadine; Koch, Dorothy; Francis, Jennifer; Garrett, Tim; Sednev, Igor; Shindell, Drew; Streets, David

    2007-11-26

    We investigate aerosol effects on climate for 1980, 1995 (meant to reflect present-day) and 2030 using the NASA Goddard Institute for Space Studies climate model coupled to an on-line aerosol source and transport model with interactive oxidant and aerosol chemistry. Aerosols simulated include sulfates, organic matter (OM), black carbon (BC), sea-salt and dust; additionally, the amount of tropospheric ozone is calculated, allowing us to estimate both changes to air quality and climate for different time periods and emission amounts. We include both the direct aerosol effect and indirect aerosol effects for liquid-phase clouds. Future changes for the 2030 A1B scenario are examined, focusing on the Arctic and Asia, since changes are pronounced in these regions. Our results for the different time periods include both emission changes and physical climate changes. We find that the aerosol indirect effect (AIE) has a large impact on photochemical processing, decreasing ozone amount and ozone forcing, especially for the future (2030-1995). Ozone forcings increase from 0 to 0.12 W m-2 and the total aerosol forcing increases from -0.10 W m-2 to -0.94 W m-2 (AIE increases from -0.13 to -0.68 W m-2) for 1995-1980 versus 2030-1995. Over the Arctic we find that, compared to ozone and the direct aerosol effect, the AIE contributes the most to net radiative flux changes. The AIE, calculated for 1995-1980, is positive (1.0 W m-2), but the magnitude decreases (-0.3 W m-2) considerably for the future scenario. Over Asia, we evaluate the role of biofuel and transportation-based emissions (for BC and OM) via a scenario (2030A) that includes a projected increase (factor of two) in biofuel and transport-based emissions for 2030 A1B over Asia. Projected changes from present-day due to the 2030A emissions versus 2030 A1B are a factor of 4 decrease in summertime precipitation in Asia. Our results are sensitive to the emissions used. Uncertainty in present

  8. The impact of humidity above stratiform clouds on indirect aerosol climate forcing.

    PubMed

    Ackerman, Andrew S; Kirkpatrick, Michael P; Stevens, David E; Toon, Owen B

    2004-12-23

    Some of the global warming from anthropogenic greenhouse gases is offset by increased reflection of solar radiation by clouds with smaller droplets that form in air polluted with aerosol particles that serve as cloud condensation nuclei. The resulting cooling tendency, termed the indirect aerosol forcing, is thought to be comparable in magnitude to the forcing by anthropogenic CO2, but it is difficult to estimate because the physical processes that determine global aerosol and cloud populations are poorly understood. Smaller cloud droplets not only reflect sunlight more effectively, but also inhibit precipitation, which is expected to result in increased cloud water. Such an increase in cloud water would result in even more reflective clouds, further increasing the indirect forcing. Marine boundary-layer clouds polluted by aerosol particles, however, are not generally observed to hold more water. Here we simulate stratocumulus clouds with a fluid dynamics model that includes detailed treatments of cloud microphysics and radiative transfer. Our simulations show that the response of cloud water to suppression of precipitation from increased droplet concentrations is determined by a competition between moistening from decreased surface precipitation and drying from increased entrainment of overlying air. Only when the overlying air is humid or droplet concentrations are very low does sufficient precipitation reach the surface to allow cloud water to increase with droplet concentrations. Otherwise, the response of cloud water to aerosol-induced suppression of precipitation is dominated by enhanced entrainment of overlying dry air. In this scenario, cloud water is reduced as droplet concentrations increase, which diminishes the indirect climate forcing.

  9. Impacts of oxidation aging on secondary organic aerosol formation, particle growth rate, cloud condensation nuclei abundance, and aerosol climate forcing

    NASA Astrophysics Data System (ADS)

    Yu, F.; Luo, G.

    2014-12-01

    Particle composition measurements indicate that organic aerosol (OA) makes up ~20-90% of submicron particulate mass, and that secondary OA (SOA) accounts for a large fraction (~72 ± 21%) of this OA mass at many locations around the globe. The volatility changes of secondary organic gases (SOG) associated with oxidation aging, as well as the contribution of highly oxidized low-volatility SOG (LV-SOG) to the condensational growth of secondary particles, have been found to be important in laboratory and field measurements but are poorly represented in global models. A novel scheme to extend the widely used two-product SOA formation model, by adding a third product arising from oxidation aging (i.e., LV-SOG) and considering the dynamic transfer of mass from higher- to lower-volatility products, has been developed and implemented in a global chemical transport model (GEOS-Chem) and a community atmosphere model (CESM-CAM5). The scheme requires only minor changes to the existing two-product SOA formation model and is computationally efficient. With the oxidation rate constrained by laboratory measurements, we show that the new scheme predicts much higher SOA mass concentrations, improving the agreement with aerosol mass spectrometer SOA measurements. The kinetic condensation of LV-SOG on ultrafine particles, simulated by a size-resolved (sectional) advanced particle microphysics (APM) model incorporated into GEOS-Chem and CAM5, increases the particle growth rate substantially and improves the agreement of simulated cloud condensation nuclei (CCN) concentrations with observations. Based on GEOS-Chem-APM simulations, the new SOA formation scheme increases global mean lower-troposphere SOA mass concentration by ~130%, CCN abundance by ~15%, and the optical depth of secondary particles and of coated black carbon and primary organic carbon particles by ~10%. As a result, the aerosol radiative cooling effect (direct + first indirect) is enhanced by -0.9 W/m2, with large spatial
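    The "dynamic transfer of mass from higher- to lower-volatility products" described above can be pictured as a volatility cascade: oxidation moves gas-phase mass down a chain of product bins, the last of which is the LV-SOG. A minimal explicit-Euler sketch of that idea (bin count, rate constant and step size are illustrative, not the scheme's actual parameters):

```python
def step_soa_aging(sog, k_age, dt):
    """One explicit-Euler step of volatility aging for a three-product
    SOA scheme: mass cascades from higher- to lower-volatility bins
    (sog[0] -> sog[1] -> sog[2], the LV-SOG) at a pseudo-first-order
    oxidation rate k_age (1/s). Total mass is conserved.
    """
    transfer01 = k_age * sog[0] * dt
    transfer12 = k_age * sog[1] * dt
    return [sog[0] - transfer01,
            sog[1] + transfer01 - transfer12,
            sog[2] + transfer12]

# Illustrative: one hour of aging from an initial split of 1.0 / 0.5 / 0.0
aged = step_soa_aging([1.0, 0.5, 0.0], 1e-5, 3600.0)
```

Because the lowest-volatility bin only gains mass, repeated steps steadily build up LV-SOG available for condensational growth, which is the mechanism the scheme uses to raise SOA mass and CCN.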

  10. Production of satellite-derived aerosol climate data records: current status of the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    de Leeuw, Gerrit; Holzer-Popp, Thomas; Pinnock, Simon

    2015-04-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (Phase 1: 2010 - 2014; Phase 2: 2014 - 2017), the Aerosol_cci team has conducted intensive work to improve algorithms for the retrieval of aerosol information from the European sensors ATSR (3 algorithms), PARASOL, MERIS (3 algorithms), synergetic AATSR/SCIAMACHY, OMI and GOMOS. Whereas OMI and GOMOS were used to derive the absorbing aerosol index and stratospheric extinction profiles, respectively, Aerosol Optical Depth (AOD) and the Ångström coefficient were retrieved from the other sensors. The cooperation between the project partners, including both the retrieval teams and independent validation teams, has resulted in a strong improvement of most algorithms. In particular, the AATSR-retrieved AOD is qualitatively similar to that from MODIS (usually taken as the standard), MISR and SeaWiFS. This conclusion has been reached from several different ways of validating the L2 and L3 products, using AERONET sun photometer data as the common ground truth for the application of both 'traditional' statistical techniques and a 'scoring' technique using spatial and temporal correlations. Quantitatively, the limited AATSR swath width of 500 km results in a smaller amount of data. Nevertheless, the assimilation of AATSR-retrieved AOD, together with MODIS data, contributes to improving the ECMWF climate model results. In addition to the multi-spectral AOD, and thus the Ångström exponent, a per-pixel uncertainty is also provided and validated. By the end of Aerosol_cci Phase 1 the ATSR algorithms had been applied to both ATSR-2 and AATSR, resulting in an AOD time series of 17 years. In Phase 2 this work is continued with a focus on further improvement of the ATSR algorithms as well as those for the other instruments and algorithms mentioned above, which in Phase 1 were considered less mature. The first efforts are on the further characterization of the uncertainties and on better understanding of the

  11. Confronting AeroCom models with particle size distribution data from surface in situ stations

    NASA Astrophysics Data System (ADS)

    Platt, Stephen; Fiebig, Markus; Mann, Graham; Schulz, Michael

    2016-04-01

    The size distribution is the most important property for describing any interaction of an aerosol particle population with its surroundings. To first order, it determines both the aerosol optical properties that quantify the direct aerosol climate effect and the fraction of aerosol particles acting as cloud condensation nuclei that quantifies the indirect aerosol climate effect. Aerosol schemes of modern climate models resolve the aerosol particle size distribution (APSD) explicitly. In improving the skill of climate models, it is therefore highly useful to confront these models with precision APSD data observed at surface stations. Corresponding previous work focused on comparing size-integrated, seasonal particle concentrations at selected sites with ensemble model averages to assess overall model skill. Building on this work, this project intends to refine the approach by comparing the median particle size and integral concentration of fitted modal size distributions. It will also look at skill differences between models in order to find reasons for matches and discrepancies. The presentation will outline the project and will elaborate on the input requested from modelling groups to participate in the exercise.
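    The "fitted modal size distributions" mentioned above are usually lognormal modes, each characterized by the two quantities the project proposes to compare: an integral number concentration and a median diameter (plus a geometric standard deviation). A minimal sketch of one such mode (parameter values below are purely illustrative):

```python
import math

def lognormal_dNdlnD(D, N_tot, D_median, sigma_g):
    """Number size distribution dN/dlnD of a single lognormal mode,
    as typically fitted to observed aerosol particle size distributions.

    N_tot:    integral number concentration of the mode
    D_median: number median diameter
    sigma_g:  geometric standard deviation (> 1)
    """
    return (N_tot / (math.sqrt(2.0 * math.pi) * math.log(sigma_g))
            * math.exp(-(math.log(D / D_median) ** 2)
                       / (2.0 * math.log(sigma_g) ** 2)))
```

Integrating dN/dlnD over ln(D) recovers N_tot, so a model-observation comparison reduces to comparing the fitted (N_tot, D_median, sigma_g) triplets mode by mode.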

  12. Aerosol climate time series from ESA Aerosol_cci (Invited)

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.

    2013-12-01

    Within the ESA Climate Change Initiative (CCI), the Aerosol_cci project (mid 2010 - mid 2013, phase 2 proposed 2014 - 2016) has conducted intensive work to improve algorithms for the retrieval of aerosol information from the European sensors AATSR (3 algorithms), PARASOL, MERIS (3 algorithms), synergetic AATSR/SCIAMACHY, OMI and GOMOS. Whereas OMI and GOMOS were used to derive the absorbing aerosol index and stratospheric extinction profiles, respectively, Aerosol Optical Depth (AOD) and the Ångström coefficient were retrieved from the other sensors. Global datasets for 2008 were produced and validated against independent ground-based data and other satellite datasets (MODIS, MISR). An additional 17-year dataset is currently being generated using ATSR-2/AATSR data. During the three years of the project, intensive collaborative efforts were made to improve the retrieval algorithms, focusing on the most critical modules. The team agreed on the use of a common definition for the aerosol optical properties. Cloud masking was evaluated, but a rigorous analysis with a prescribed cloud mask did not lead to improvement for all algorithms. Better results were obtained using a post-processing step in which sudden transitions, indicative of possible cloud contamination, were removed. Surface parameterization, which is most critical for the nadir-only algorithms (MERIS and synergetic AATSR/SCIAMACHY), was studied to a limited extent. The retrieval results for AOD, Ångström exponent (AE) and uncertainties were evaluated by comparison with data from AERONET (and a limited amount of MAN) sun photometers and with satellite data available from MODIS and MISR. Both level 2 and level 3 (gridded daily) datasets were validated. 
Several validation metrics were used (standard statistical quantities such as bias, RMSE, Pearson correlation and linear regression, as well as scoring approaches to quantitatively evaluate the spatial and temporal correlations against AERONET), and in some cases developed further, to evaluate the datasets and their regional and seasonal merits. The validation showed that most datasets have improved significantly; in particular, PARASOL (ocean only) provides excellent results. The metrics for the AATSR (land and ocean) datasets are similar to those of MODIS and MISR, with AATSR better in some land regions and less good in some others (ocean). However, AATSR coverage is smaller than that of MODIS due to its swath width. The MERIS dataset provides better coverage than AATSR but has lower quality (especially over land) than the other datasets. The synergetic AATSR/SCIAMACHY dataset also has lower quality. The evaluation of the pixel uncertainties shows good first results but also reveals that more work needs to be done to provide comprehensive information for data assimilation. Users (MACC/ECMWF, AEROCOM) confirmed the relevance of this additional information and encouraged Aerosol_cci to release the current uncertainties. The paper will summarize and discuss the results of three years of work in Aerosol_cci, extract the lessons learned and conclude with an outlook on the work proposed for the next three years. In this second phase, a cyclic effort of algorithm evolution, dataset generation, validation and assessment will be applied to produce and further improve complete time series from all sensors under investigation; new sensors will be added (e.g. IASI), and preparations for the Sentinel missions will be made.
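    The "standard statistical quantities" used in such validations reduce to a few one-line formulas applied to collocated satellite and AERONET AOD pairs. A self-contained sketch (variable names are illustrative; the project's scoring approaches are more involved):

```python
import math

def validation_metrics(retrieved, reference):
    """Bias, RMSE and Pearson correlation of retrieved AOD against a
    ground-truth reference (e.g., collocated AERONET observations)."""
    n = len(retrieved)
    bias = sum(r - t for r, t in zip(retrieved, reference)) / n
    rmse = math.sqrt(sum((r - t) ** 2
                         for r, t in zip(retrieved, reference)) / n)
    mr = sum(retrieved) / n
    mt = sum(reference) / n
    cov = sum((r - mr) * (t - mt) for r, t in zip(retrieved, reference))
    var_r = sum((r - mr) ** 2 for r in retrieved)
    var_t = sum((t - mt) ** 2 for t in reference)
    pearson = cov / math.sqrt(var_r * var_t)
    return bias, rmse, pearson
```

A dataset that is perfectly correlated but offset from AERONET shows up as pearson near 1 with a nonzero bias, which is why the validations report all three quantities rather than any single one.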

  13. Effect of aerosol subgrid variability on aerosol optical depth and cloud condensation nuclei: implications for global aerosol modelling

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Schutgens, Nick; Stier, Philip

    2016-11-01

    A fundamental limitation of grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies in simulated aerosol climate effects between high- and low-resolution models. This study investigates the impact of neglecting subgrid variability in present-day global microphysical aerosol models on aerosol optical depth (AOD) and cloud condensation nuclei (CCN). We introduce a novel technique to isolate the effect of aerosol variability from other sources of model variability by varying the resolution of aerosol and trace gas fields while maintaining a constant resolution in the rest of the model. We compare WRF-Chem (Weather Research and Forecasting model coupled with Chemistry) runs in which aerosols and gases are simulated at 80 km and at 10 km resolution; in both simulations the other model components, such as meteorology and dynamics, are kept at the 10 km baseline resolution. We find that AOD is underestimated by 13 % and CCN is overestimated by 27 % when aerosols and gases are simulated at 80 km resolution compared to 10 km. The processes most affected by neglecting aerosol subgrid variability are gas-phase chemistry and aerosol uptake of water through aerosol-gas equilibrium reactions. The inherent non-linearities in these processes result in large changes in aerosol properties when aerosol and gaseous species are artificially mixed over large spatial scales. These changes in aerosol and gas concentrations are exaggerated by convective transport, which carries the altered concentrations to altitudes where their effect is more pronounced. These results demonstrate that aerosol variability can have a large impact on simulated aerosol climate effects, even when meteorology and dynamics are held constant. Future aerosol model development should focus on accounting for the effect of subgrid variability on these
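    The underlying mathematics of the subgrid bias is Jensen's inequality: for a nonlinear process, applying the process to the grid-box mean concentration is not the same as averaging the process over the subgrid values. A toy demonstration with a quadratic stand-in process (this is not WRF-Chem's chemistry, just an illustration of the nonlinearity argument):

```python
def nonlinear_process(c):
    """Stand-in nonlinear aerosol process, e.g. a second-order
    gas-phase reaction whose rate scales with concentration squared."""
    return c * c

def mean_of_process(values):
    # High-resolution analogue: run the process on each subgrid value,
    # then average the results.
    return sum(nonlinear_process(v) for v in values) / len(values)

def process_of_mean(values):
    # Coarse-resolution analogue: artificially mix the species over
    # the grid box first, then run the process on the mean.
    return nonlinear_process(sum(values) / len(values))

# Subgrid concentrations a coarse grid box would artificially mix:
# one polluted plume embedded in mostly clean air.
subgrid = [0.1, 0.1, 0.1, 1.7]
```

For a convex process the coarse-grid value always underestimates the true subgrid-resolved mean, which is the kind of systematic bias the 80 km vs 10 km comparison quantifies for AOD and CCN.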

  14. A Simple Model of Global Aerosol Indirect Effects

    SciTech Connect

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, K. J.; Carslaw, K. S.; Pierce, Jeffrey; Bauer, Susanne E.; Adams, P. J.

    2013-06-28

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth’s energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically-based model expresses the aerosol indirect effect using analytic representations of droplet nucleation, cloud and aerosol vertical structure, and horizontal variability in cloud water and aerosol concentration. Although the simple model is able to produce estimates of aerosol indirect effects that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates are found to be sensitive to several uncertain parameters, including the preindustrial cloud condensation nuclei concentration, primary and secondary anthropogenic emissions, the size of the primary particles, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Aerosol indirect effects are surprisingly linear in emissions. This simple model provides a much stronger physical basis for representing aerosol indirect effects than previous representations in integrated assessment models designed to quickly explore the parameter space of emissions-climate interactions. The model also produces estimates that depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models.
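    The analytic droplet-nucleation and albedo machinery the abstract refers to can be illustrated with a Twomey-style back-of-envelope calculation (a hedged sketch, not the authors' model; the cloud fraction, cloud albedo, and the 1/3 susceptibility factor are illustrative assumptions):

```python
import math

def twomey_forcing(n_pi, n_pd, cloud_frac=0.3, albedo=0.5, s0=1361.0):
    """First-indirect-effect estimate from a droplet-number change.

    Uses the classic susceptibility dA = A(1-A)/3 * ln(N_pd/N_pi),
    scaled by global-mean insolation s0/4 and cloud fraction.
    n_pi, n_pd: preindustrial / present-day droplet number (cm-3).
    Returns forcing in W m-2 (negative = cooling).
    """
    d_albedo = albedo * (1.0 - albedo) / 3.0 * math.log(n_pd / n_pi)
    return -0.25 * s0 * cloud_frac * d_albedo

aie = twomey_forcing(n_pi=100.0, n_pd=150.0)
print(f"{aie:.2f} W m-2")
```

    The logarithmic dependence on the droplet-number ratio is one reason such estimates can come out "surprisingly linear in emissions" over the plausible range.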

  15. A Simple Model of Global Aerosol Indirect Effects

    NASA Technical Reports Server (NTRS)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  16. Simulations of Aerosol Microphysics in the NASA GEOS-5 Model

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; Smith; Randles; daSilva

    2010-01-01

    Aerosol-cloud-chemistry interactions have potentially large but uncertain impacts on Earth's climate. One path to addressing these uncertainties is to construct models that incorporate various components of the Earth system and to test these models against data. To that end, we have previously incorporated the Goddard Chemistry, Aerosol, Radiation, and Transport (GOCART) module online in the NASA Goddard Earth Observing System model (GEOS-5). GEOS-5 provides a platform for Earth system modeling, incorporating atmospheric and ocean general circulation models, a land surface model, a data assimilation system, and treatments of atmospheric chemistry and the hydrologic cycle. Including GOCART online in this framework has provided a path for interactive aerosol-climate studies; however, GOCART only tracks the mass of aerosols as external mixtures and does not include the detailed treatments of aerosol size distribution and composition (internal mixtures) needed for aerosol-cloud-chemistry-climate studies. To address that need we have incorporated the Community Aerosol and Radiation Model for Atmospheres (CARMA) online in GEOS-5. CARMA is a sectional aerosol-cloud microphysical model, capable of treating both aerosol size and composition explicitly by resolving the aerosol distribution into a variable number of size and composition groupings. Here we present first simulations of dust, sea salt, and smoke aerosols in GEOS-5 as treated by CARMA. These simulations are compared to available aerosol satellite, ground, and aircraft data, as well as to the simulated distributions in our current GOCART-based system.
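    The sectional (bin) discretization that CARMA-type models use can be illustrated with a minimal sketch (the bin count and size range here are illustrative, not the GEOS-5 CARMA configuration):

```python
import numpy as np

def make_bins(r_min=1e-9, r_max=2.6e-6, n_bins=24):
    """Geometrically spaced radius bins (meters): edges and geometric centers.

    Each bin carries its own number concentration (and, per composition
    group, mass), so the size distribution is resolved explicitly instead
    of being summarized by a few bulk mass tracers.
    """
    edges = np.geomspace(r_min, r_max, n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return edges, centers

edges, centers = make_bins()

# Sample an illustrative lognormal number distribution onto the bins
# (median radius 0.1 micron, geometric standard deviation 2).
n_per_bin = np.exp(-0.5 * (np.log(centers / 1e-7) / np.log(2.0)) ** 2)
```

    Growth, coagulation, and sedimentation then act on each bin individually, which is what makes sectional schemes a natural reference against bulk (external-mixture) treatments like GOCART.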

  17. Modeling Saharan dust emission and transport: sensitivity to emission parameterization schemes

    NASA Astrophysics Data System (ADS)

    Todd, Martin; Cavazos, Carolina; Chenglai, Wu; Wang, Yi; Lin, Zhaohui; Washington, Richard

    2013-04-01

    Mineral dust aerosols are an important component of the Earth system. Increasingly, dust 'cycle' processes are being incorporated into numerical weather prediction models and Earth system models for climate analyses, to provide fully coupled aerosol-climate models. Dust emission is the fundamental process in the dust cycle, but parameterising it in weather and climate models is challenging due to (i) the disparity in scale between the micro-scale emission processes and model grid cell resolution, (ii) the lack of detailed soil and surface data over many desert regions, and (iii) the lack of adequate data for model validation. Previous studies indicate high uncertainty in model emission estimates. The project 'Fennec: the Saharan Climate System' provides a valuable test bed for comparing and validating model dust cycle processes. In this study an intercomparison of five widely used dust emission parameterisations was conducted. The Marticorena & Bergametti (1995), Shao et al. (1996), Lu & Shao (1999), Shao (2001), and Shao (2004) schemes were coded into the WRF-CHEM model system. WRF-CHEM was configured over the Saharan domain with three nests of 27 km, 9 km, and 3 km grid resolution and run over June 2011, coincident with the Fennec Intensive Observation Period. We test the sensitivity of various dust cycle quantities (including dust emission, atmospheric load and continental-scale dust budgets) to the emission scheme parameterisation. Based on the multi-scale model nesting, this sensitivity assessment is analysed in relation to the scale of the meteorological driving processes.
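    The flavor of a Marticorena & Bergametti (1995)-type emission scheme can be sketched as follows (a hedged illustration: the threshold friction velocity and the sandblasting efficiency `alpha` below are placeholder values, not a published tuning):

```python
import numpy as np

RHO_AIR = 1.227   # air density, kg m-3
GRAV = 9.81       # gravitational acceleration, m s-2

def saltation_flux(u_star, u_star_t):
    """Horizontal saltation flux G (kg m-1 s-1), MB95-style form:
    G ~ (rho/g) * u*^3 * (1 + u*t/u*) * (1 - u*t^2/u*^2), zero below threshold."""
    u_star = np.asarray(u_star, dtype=float)
    ratio = u_star_t / np.maximum(u_star, 1e-12)
    g = (RHO_AIR / GRAV) * u_star**3 * (1.0 + ratio) * (1.0 - ratio**2)
    return np.where(u_star > u_star_t, g, 0.0)

def vertical_dust_flux(u_star, u_star_t=0.25, alpha=1e-4):
    """Vertical dust emission (kg m-2 s-1) as a fixed fraction alpha of G
    (in full schemes alpha depends on soil texture and clay content)."""
    return alpha * saltation_flux(u_star, u_star_t)
```

    The cubic dependence on friction velocity above the threshold is why emission totals are so sensitive both to the scheme chosen and to the resolution of the driving winds, as the nested 27-9-3 km experiments probe.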

  18. Evidence for liquid-phase cirrus cloud formation from volcanic aerosols - Climatic implications

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth

    1992-01-01

    Supercooled droplets in cirrus uncinus cell heads between -40 and -50 C are identified from the First International Satellite Cloud Climatology Project Regional Experiment polarization lidar measurements. Although short-lived, complexes of these small liquid cells seem to have contributed importantly to the formation of the cirrus. Freezing-point depression effects in solution droplets, apparently resulting from relatively large cloud condensation nuclei of volcanic origin, can be used to explain this rare phenomenon. An unrecognized volcano-cirrus cloud climate feedback mechanism is implied by these findings.

  19. Evidence for liquid-phase cirrus cloud formation from volcanic aerosols: climatic implications.

    PubMed

    Sassen, K

    1992-07-24

    Supercooled droplets in cirrus uncinus cell heads between -40 degrees and -50 degrees C are identified from Project FIRE [First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment] polarization lidar measurements. Although short-lived, complexes of these small liquid cells seem to have contributed importantly to the formation of the cirrus. Freezing-point depression effects in solution droplets, apparently resulting from relatively large cloud condensation nuclei of volcanic origin, can be used to explain this rare phenomenon. An unrecognized volcano-cirrus cloud climate feedback mechanism is implied by these findings.

  20. Mechanisms of African Aerosol-Climate Interactions and their Forcing on Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Hosseinpour, F.; Wilcox, E. M.

    2012-12-01

    African climate is often interpreted as a continental-scale phenomenon that can exert strong and complex remote influences on the global atmospheric circulation, as well as on large-scale patterns of climate variability and climate change. More than 90% of aerosol optical depth (AOD) over West Africa, and more than 80% over the eastern tropical Atlantic Ocean during summer, is contributed by dust particles (K. M. Lau et al., 2009). Dust aerosols and smoke from wildfires can propagate vertically above the boundary layer, which allows mid-tropospheric zonal winds to transport them thousands of kilometers horizontally. This process extends their atmospheric life cycles and their global climatic impact. In this study, an ensemble of the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and Moderate Resolution Imaging Spectroradiometer (MODIS) data sets for the warm season was used to better understand the long-term effect of African radiative forcing on large-scale dynamical systems and their interaction with climatic circulations. Under cloud-free conditions, a strong positive correlation exists between total precipitation and soil wetness over the western tropics of Africa and dust AOD over northern Africa. Increasing absorption of shortwave radiation by smoke in southwest Africa is accompanied by more precipitation in the ITCZ, which is associated with a positive core of vorticity at 500 hPa over the western African tropical region. African dust and smoke have a warming effect on the vertically averaged troposphere. In the warm season, the long-term average of atmospheric heating due to dust and smoke ranges from 10 to 35 Wm-2. Furthermore, climatic variability is lowest over the western tropics of Africa, which could be due to a soil-moisture memory feedback.
Convection, along with the net downward longwave radiative flux at the surface, has increased over the northern African dust region, while these components have decreased over the southern African smoke region in the recent decade. We are exploring the mechanisms that explain these relationships and their connections to, or impacts on, dust and smoke aerosols.

  1. Assessing the effects of anthropogenic aerosols on Pacific storm track using a multiscale global climate model.

    PubMed

    Wang, Yuan; Wang, Minghuai; Zhang, Renyi; Ghan, Steven J; Lin, Yun; Hu, Jiaxi; Pan, Bowen; Levy, Misti; Jiang, Jonathan H; Molina, Mario J

    2014-05-13

    Atmospheric aerosols affect weather and global general circulation by modifying cloud and precipitation processes, but the magnitude of cloud adjustment by aerosols remains poorly quantified and represents the largest uncertainty in estimated forcing of climate change. Here we assess the effects of anthropogenic aerosols on the Pacific storm track, using a multiscale global aerosol-climate model (GCM). Simulations of two aerosol scenarios corresponding to the present day and preindustrial conditions reveal long-range transport of anthropogenic aerosols across the north Pacific and large resulting changes in the aerosol optical depth, cloud droplet number concentration, and cloud and ice water paths. Shortwave and longwave cloud radiative forcing at the top of atmosphere are changed by -2.5 and +1.3 W m(-2), respectively, by emission changes from preindustrial to present day, and an increased cloud top height indicates invigorated midlatitude cyclones. The overall increased precipitation and poleward heat transport reflect intensification of the Pacific storm track by anthropogenic aerosols. Hence, this work provides, for the first time to the authors' knowledge, a global perspective of the effects of Asian pollution outflows from GCMs. Furthermore, our results suggest that the multiscale modeling framework is essential in producing the aerosol invigoration effect of deep convective clouds on a global scale.

  2. Assessing the Effects of Anthropogenic Aerosols on Pacific Storm Track Using a Multiscale Global Climate Model

    SciTech Connect

    Wang, Yuan; Wang, Minghuai; Zhang, Renyi; Ghan, Steven J.; Lin, Yun; Hu, Jiaxi; Pan, Bowen; Levy, Misti; Jiang, Jonathan; Molina, Mario J.

    2014-05-13

    Atmospheric aerosols impact weather and global general circulation by modifying cloud and precipitation processes, but the magnitude of cloud adjustment by aerosols remains poorly quantified and represents the largest uncertainty in estimated forcing of climate change. Here we assess the impacts of anthropogenic aerosols on the Pacific storm track using a multi-scale global aerosol-climate model (GCM). Simulations of two aerosol scenarios corresponding to the present day and pre-industrial conditions reveal long-range transport of anthropogenic aerosols across the north Pacific and large resulting changes in the aerosol optical depth, cloud droplet number concentration, and cloud and ice water paths. Shortwave and longwave cloud radiative forcing at the top of atmosphere are changed by -2.5 and +1.3 W m-2, respectively, by emission changes from pre-industrial to present day, and an increased cloud-top height indicates invigorated mid-latitude cyclones. The overall increased precipitation and poleward heat transport reflect intensification of the Pacific storm track by anthropogenic aerosols. Hence, this work provides for the first time a global perspective of the impacts of Asian pollution outflows from GCMs. Furthermore, our results suggest that the multi-scale modeling framework is essential in producing the aerosol invigoration effect of deep convective clouds on the global scale.

  3. Modeling Study of the Effect of Anthropogenic Aerosols on Late Spring Drought in South China

    SciTech Connect

    Hu, Ning; Liu, Xiaohong

    2013-10-01

    In this study, the mechanisms underlying the decadal variability of late spring precipitation in south China are investigated using version 1 of the Community Earth System Model (CESM1). We aim to unravel the effects of different climate forcing agents, such as aerosols and greenhouse gases (GHGs), on the decadal variation of precipitation with transient experiments from pre-industrial (year 1850) to present-day (year 2000) conditions. Our results reveal that: (1) CESM1 can reproduce the climatological features of atmospheric circulation and precipitation for late spring in south China; (2) only simulations including the forcing of anthropogenic aerosols can reproduce the observed decreasing trend of late spring precipitation from 1950 to 2000 in south China; (3) aerosols affect the decadal change of precipitation mainly by altering the large-scale atmospheric circulation, and to a lesser extent by increasing the lower-tropospheric stability, which inhibits convective precipitation; and (4) in comparison, other climate forcing agents, such as GHGs, have much smaller effects on the decadal change of spring precipitation in south China. Key words: precipitation, aerosols, climate change, south China, Community Earth System Model

  4. Global-mean temperature change from shipping toward 2050: improved representation of the indirect aerosol effect in simple climate models.

    PubMed

    Lund, Marianne Tronstad; Eyring, Veronika; Fuglestvedt, Jan; Hendricks, Johannes; Lauer, Axel; Lee, David; Righi, Mattia

    2012-08-21

    We utilize a range of emission scenarios for shipping to determine the induced global-mean radiative forcing and temperature change. Ship emission scenarios consistent with the new regulations on nitrogen oxides (NO(x)) and sulfur dioxide (SO(2)) from the International Maritime Organization and two of the Representative Concentration Pathways are used as input to a simple climate model (SCM). Based on a complex aerosol-climate model we develop and test new parametrizations of the indirect aerosol effect (IAE) in the SCM that account for nonlinearities in radiative forcing of ship-induced IAE. We find that shipping causes a net global cooling impact throughout the period 1900-2050 across all parametrizations and scenarios. However, calculated total net global-mean temperature change in 2050 ranges from -0.03[-0.07,-0.002]°C to -0.3[-0.6,-0.2]°C in the A1B scenario. This wide range across parametrizations emphasizes the importance of properly representing the IAE in SCMs and of reflecting the uncertainties from complex global models. Furthermore, our calculations show that the future ship-induced temperature response is likely a continued cooling if SO(2) and NO(x) emissions continue to increase due to a strong increase in activity, despite current emission regulations. However, such cooling does not negate the need for continued efforts to reduce CO(2) emissions, since residual warming from CO(2) is long-lived.
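    How an SCM turns a forcing time series into global-mean temperature change can be sketched with a two-timescale impulse-response kernel (the amplitudes and timescales below are illustrative placeholders, not the calibrated SCM used in the study):

```python
import numpy as np

def temperature_response(forcing, dt=1.0, amps=(0.6, 0.4), taus=(8.0, 400.0)):
    """Convolve an annual forcing series (W m-2) with a sum-of-exponentials
    response kernel; amps (K per W m-2) set the equilibrium sensitivity,
    taus (years) the fast (mixed-layer) and slow (deep-ocean) timescales.
    Returns delta-T (K) at each year."""
    t = np.arange(len(forcing)) * dt
    kernel = sum(a / tau * np.exp(-t / tau) for a, tau in zip(amps, taus))
    return np.convolve(forcing, kernel * dt)[: len(forcing)]

years = np.arange(1900, 2051)
# Toy scenario: a constant -0.1 W m-2 ship-IAE cooling switched on in 1950.
forcing = np.where(years >= 1950, -0.1, 0.0)
dT = temperature_response(forcing)
```

    The nonlinear IAE parametrizations the abstract describes would enter upstream of this step, by making the forcing itself a nonlinear function of SO(2) and NO(x) emissions rather than a prescribed series.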

  5. A modeling study of effective radiative forcing and climate response due to tropospheric ozone

    NASA Astrophysics Data System (ADS)

    Xie, Bing; Zhang, Hua; Wang, Zhili; Zhao, Shuyun; Fu, Qiang

    2016-07-01

    This study simulates the effective radiative forcing (ERF) of tropospheric ozone from 1850 to 2013 and its effects on global climate using an aerosol-climate coupled model, BCC AGCM2.0.1 CUACE/Aero, in combination with OMI (Ozone Monitoring Instrument) satellite ozone data. According to the OMI observations, the global annual mean tropospheric column ozone (TCO) was 33.9 DU in 2013, and the largest TCO was distributed in the belts between 30°N and 45°N and at approximately 30°S; the annual mean TCO was higher in the Northern Hemisphere than in the Southern Hemisphere; and in boreal summer and autumn, the global mean TCO was higher than in winter and spring. The simulated ERF due to the change in tropospheric ozone concentration from 1850 to 2013 was 0.46 W m-2, thereby causing an increase in the global annual mean surface temperature of 0.36°C, and in precipitation of 0.02 mm d-1 (the increase of surface temperature had a significance level above 95%). The surface temperature increased most markedly over the high latitudes in both hemispheres, with the maximum exceeding 1.4°C in Siberia. There were opposite changes in precipitation near the equator, with an increase of 0.5 mm d-1 near the Hawaiian Islands and a decrease of about 0.6 mm d-1 near the middle of the Indian Ocean.

  6. Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects

    NASA Astrophysics Data System (ADS)

    Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.

    2014-01-01

    Atmospheric models often represent the aerosol particle size distribution with a modal approach, in which particles are described with log-normal modes within predetermined size ranges. This approach reallocates particles numerically from one mode to another for example during particle growth, potentially leading to artificial changes in the aerosol size distribution. In this study we analysed how the modal reallocation affects climate-relevant variables: cloud droplet number concentration (CDNC), aerosol-cloud interaction parameter (ACI) and light extinction coefficient (qext). The ACI parameter gives the response of CDNC to a change in total aerosol number concentration. We compared these variables between a modal model (with and without reallocation routines) and a high resolution sectional model, which was considered a reference model. We analysed the relative differences in the chosen variables in four experiments designed to assess the influence of atmospheric aerosol processes. We find that limiting the allowed size ranges of the modes, and subsequent remapping of the distribution, leads almost always to an underestimation of cloud droplet number concentrations (by up to 100%) and an overestimation of light extinction (by up to 20%). On the other hand, the aerosol-cloud interaction parameter can be either over- or underestimated by the reallocating model, depending on the conditions. For example, in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause on average a 10% overestimation of the ACI parameter. Thus it is shown that the reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
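    The reallocation step under discussion can be sketched as follows (a hypothetical illustration, not the authors' code): the number in a log-normal mode that grows past the mode's allowed size range is clipped off and handed to the next mode.

```python
import numpy as np

def lognormal_number(r, n_tot, r_med, sigma_g):
    """dN/dlnr of a log-normal mode with total number n_tot,
    median radius r_med and geometric standard deviation sigma_g."""
    return (n_tot / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(r / r_med) / np.log(sigma_g)) ** 2))

def reallocate(n_tot, r_med, r_limit, sigma_g=1.6):
    """Split mode number at the mode's upper size limit r_limit:
    returns (number staying in the mode, number moved to the next mode)."""
    r = np.geomspace(r_med / 50.0, r_med * 50.0, 2000)
    dlnr = np.log(r[1] / r[0])
    dndlnr = lognormal_number(r, n_tot, r_med, sigma_g)
    stays = (dndlnr[r <= r_limit] * dlnr).sum()
    moved = (dndlnr[r > r_limit] * dlnr).sum()
    return stays, moved
```

    A sectional reference model needs no such step, since growing particles simply occupy whichever bin fits; the remapped modal distribution can therefore drift from the sectional one, which is the distortion the CDNC, ACI and extinction comparisons quantify.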

  7. Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects

    NASA Astrophysics Data System (ADS)

    Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.

    2013-08-01

    In atmospheric modelling applications, the aerosol particle size distribution is commonly represented by a modal approach, in which particles are described with log-normal modes within predetermined size ranges. This method involves numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines and a high-resolution sectional model that was considered the reference model. We analysed the relative differences of the parameters in different experiments designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the subsequent numerical remapping of the distribution by reallocation leads on average to an underestimation of cloud droplet number concentration (by up to 100%) and an overestimation of light extinction (by up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause on average a 10% overestimation of the ACI parameter. Thus it is shown that the reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.

  8. Intercomparison of aerosol microphysics modules in the framework of the ECHAM5 climate model

    NASA Astrophysics Data System (ADS)

    Hommel, R.; Kokkola, H.; Kazil, J.; Niemeier, U.; Partanen, A. I.; Feichter, J.; Timmreck, C.

    2009-04-01

    Aerosols are a fundamental constituent of the atmospheric composition and affect the global climate through a variety of physical and chemical interactions in the troposphere and stratosphere. Large volcanic eruptions alter the Earth's radiative balance and interfere with the catalytic cycles of ozone depletion, mainly through the formation of micrometer-size aerosol particles above the tropopause. Recent experimental and numerical investigations of process-oriented aerosol-climate interactions revealed that the corresponding climate effects can only be modeled when information about the aerosol size and number spectra is provided. Nevertheless, in the majority of climate models, volcanic perturbations of the stratosphere are either prescribed based on the aerosol parameters of interest (surface area, optical depth) or the aerosol microphysics is considered explicitly but with a heavily reduced number of degrees of freedom. This leads, for example, to underestimation of surface temperature effects in the wake of an eruption. To overcome that weakness, we tested three aerosol modules currently available in the framework of the climate model ECHAM5 under environmental conditions assumed to be representative of the stratosphere after the injection of SO2 from modest to large volcanic eruptions. The study focuses on the evolution of liquid H2SO4/H2O aerosol. The modal model M7, currently the default aerosol scheme in ECHAM5, is compared with two sectional aerosol schemes: the moving-centre sectional aerosol scheme SALSA, and the fixed sectional scheme SAM2. Since direct measurements of particle size information during the initial stage of a volcanic injection into the stratosphere are not available, the detailed sectional aerosol model MAIA is used as a reference in this study. It is shown that all modules are able to represent a "typical" stratospheric background aerosol distribution when the particles are formed via the oxidation pathway of SO2. However, the modules

  9. Evaluation of autoconversion schemes in a single model framework with satellite observations

    NASA Astrophysics Data System (ADS)

    Michibata, Takuro; Takemura, Toshihiko

    2015-09-01

    We examined the performance of autoconversion (mass transfer from cloud water to rainwater by the coalescence of cloud droplets) schemes in warm rain, which are commonly used in general circulation models. To exclude biases from different treatments of the aerosol-cloud-precipitation-radiation interaction other than the autoconversion process itself, sensitivity experiments were conducted within a single model framework using an aerosol-climate model, MIROC-SPRINTARS. The liquid water path (LWP) and cloud optical thickness have a particularly high sensitivity to the autoconversion schemes, and their sensitivity is of the same magnitude as model biases. In addition, the ratio of accretion to autoconversion (Acc/Aut ratio), a key parameter in examining the balance of microphysical conversion processes, also has a high sensitivity globally depending on the scheme used. Although the Acc/Aut ratio monotonically increases with increasing LWP, a significantly lower ratio is observed in Kessler-type schemes. Compared to satellite observations, a poor representation of cloud macrophysical structure and optically thicker low cloud are found in simulations with any of the autoconversion schemes. As a result of the cloud-radiation interaction, the difference in the global mean net cloud radiative forcing (NetCRF) among the schemes reaches 10 Wm-2. The discrepancy between the observed and simulated NetCRF is especially large at high LWP. The potential uncertainty in the parameterization of the autoconversion process is non-negligible, and no formulation yet significantly improves the bias in the cloud radiative effect. This means that more fundamental errors remain in other processes of the model.
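    Two of the autoconversion formulations commonly compared in such studies can be sketched as follows (the coefficients follow the widely cited forms of Kessler (1969) and Khairoutdinov & Kogan (2000), but treat this as an illustration, not the MIROC-SPRINTARS implementation):

```python
def autoconv_kessler(qc, k=1e-3, qc0=5e-4):
    """Kessler-type threshold scheme: rate = k * (qc - qc0), zero below qc0.
    qc: cloud water mixing ratio (kg/kg); returns kg/kg/s."""
    return k * max(qc - qc0, 0.0)

def autoconv_kk2000(qc, nc):
    """Khairoutdinov-Kogan power law: 1350 * qc^2.47 * Nc^-1.79.
    qc in kg/kg, nc in cm-3; returns kg/kg/s."""
    return 1350.0 * qc ** 2.47 * nc ** -1.79
```

    The abrupt switch-on of the threshold form versus the smooth droplet-number dependence of the power law is one source of the differing Acc/Aut behaviour the abstract reports for Kessler-type schemes.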

  10. Dehydration effects from contrails in a coupled contrail-climate model

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Penner, J. E.; Chen, Yibin; Zhou, Cheng; Graf, K.

    2015-10-01

    The uptake of water by contrails in ice-supersaturated air and the release of water after ice particle advection and sedimentation dehydrates the atmosphere at flight levels and redistributes humidity mainly to lower levels. The dehydration is investigated by coupling a plume-scale contrail model with a global aerosol-climate model. The contrail model simulates all the individual contrails forming from global air traffic for meteorological conditions as defined by the climate model. The computed contrail cirrus properties compare reasonably with theoretical concepts and observations. The mass of water in aged contrails may exceed 106 times the mass of water emitted from aircraft. Many of the ice particles sediment and release water in the troposphere, on average 700 m below the mean flight levels. Simulations with and without coupling are compared. The drying at contrail levels causes thinner and longer-lived contrails with about 15 % reduced contrail radiative forcing (RF). The reduced RF from contrails is on the order of 0.06 W m-2, slightly larger than estimated earlier because of higher soot emissions. For normal traffic, the RF from dehydration is small compared to interannual variability. A case with emissions increased by 100 times is used to overcome statistical uncertainty. The contrails impact the entire hydrological cycle in the atmosphere by reducing the total water column and the cover by high- and low-level clouds. For normal traffic, the dehydration changes contrail RF by positive shortwave and negative longwave contributions on the order of 0.04 W m-2, with a small negative net RF. The total net RF from contrails and dehydration remains within the range of previous estimates.

  11. Dehydration effects from contrails in a coupled contrail-climate model

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Penner, J. E.; Chen, Y.; Zhou, C.; Graf, K.

    2015-07-01

    Uptake of water by contrails in ice-supersaturated air and release of water after ice particle advection and sedimentation dehydrates the atmosphere at flight levels and redistributes humidity mainly to lower levels. The dehydration is investigated by coupling a plume-scale contrail model with a global aerosol-climate model. The contrail model simulates all the individual contrails forming from global air traffic for meteorological conditions as defined by the climate model. The computed contrail-cirrus properties compare reasonably with theoretical concepts and observations. The mass of water in aged contrails may exceed 106 times the mass of water emitted from aircraft. Many of the ice particles sediment and release water in the troposphere, on average 700 m below the mean flight levels. Simulations with and without coupling are compared. The drying at contrail levels causes thinner and longer lived contrails with about 15 % reduced contrail radiative forcing (RF). The reduced RF from contrails is of the order 0.06 W m-2, slightly larger than estimated earlier because of higher soot emissions. For normal traffic, the RF from dehydration is small compared to interannual variability. A case with 100 times increased emissions is used to overcome statistical uncertainty. The contrails impact the entire hydrological cycle in the atmosphere by reducing the total water column and the cover of high and low-level clouds. For normal traffic, the dehydration changes contrail RF by positive shortwave and negative longwave contributions of order 0.04 W m-2, with a small negative net RF. The total net RF from contrails and dehydration remains within the range of previous estimates.

  12. Using the Convective Cloud Field Model (CCFM) to investigate aerosol-convection interactions in ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Kipling, Zak; Stier, Philip; Wagner, Till

    2014-05-01

    Convection plays an important role in the climate system through its effects on radiation, precipitation, large-scale dynamics and vertical transport of aerosols and trace gases. The effects of aerosols on the development of convective cloud and precipitation are a source of considerable uncertainty in current climate modelling. Most current global climate models use 'mass-flux' convection schemes, which represent the ensemble of convective clouds in a GCM column by a single 'mean' updraught. In addition to over-simplifying the representation of such clouds, this presents particular problems in the context of aerosol-convection interactions: firstly because the relationship between aerosol and the droplet size distribution depends on the vertical velocity distribution, about which little or no information is available, and secondly because the effects of convective transport and scavenging may vary nonlinearly over the ensemble (e.g. between precipitating and non-precipitating clouds and due to different loadings). The Convective Cloud Field Model (CCFM) addresses these limitations by simulating a spectrum of updraughts with different cross-sectional areas within each GCM column, based on the quasi-equilibrium approach of Arakawa and Schubert. For each cloud type, an entraining Lagrangian parcel model is initiated by perturbations at the surface, allowing a realistic vertical velocity to develop by cloud base so that detailed size-resolved microphysics can be represented within the cloud above. These different cloud types interact via competition for resolved-scale convective available potential energy (CAPE). Transport of water, aerosol and other tracers is calculated separately for each cloud type, allowing for different entrainment and scavenging behaviours. 
By using CCFM embedded within the ECHAM6-HAM aerosol-climate model, we show how this approach can both improve the distribution of convective precipitation events compared to a typical mass-flux scheme, and
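The entraining Lagrangian parcel underlying each CCFM cloud type can be illustrated with a toy calculation. This is a minimal sketch, not the CCFM implementation: it assumes a constant fractional entrainment rate `eps`, dry-adiabatic cooling only, and textbook buoyancy/drag coefficients `A` and `B`; all parameter values are assumptions for illustration.

```python
# Toy entraining-parcel updraught: march a parcel upward through layered
# environmental temperatures, diluting it by entrainment and updating
# w**2 from buoyancy minus entrainment drag. Illustrative only.

G = 9.81       # gravity, m s-2
CP = 1004.0    # heat capacity of dry air, J kg-1 K-1
A, B = 1.0, 2.0  # assumed buoyancy/drag coefficients

def entraining_parcel(t_parcel, t_env, w0=0.5, eps=2e-4, dz=100.0):
    """t_env: environmental temperature (K) per dz-thick layer.
    Returns (heights reached with upward motion, final w**2)."""
    w2 = w0 ** 2
    heights = []
    for k, te in enumerate(t_env):
        t_parcel -= G / CP * dz                  # dry-adiabatic cooling
        t_parcel += eps * dz * (te - t_parcel)   # entrainment dilution
        buoy = G * (t_parcel - te) / te          # buoyancy acceleration
        w2 += 2.0 * dz * (A * buoy - B * eps * w2)
        if w2 <= 0.0:                            # updraught terminates
            break
        heights.append((k + 1) * dz)
    return heights, w2
```

A warm parcel released into a cooler isothermal environment accelerates at first, then decelerates and stops once dry-adiabatic cooling makes it negatively buoyant, giving a crude cloud-top height.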

  13. Evaluation of the tropospheric aerosol number concentrations simulated by two versions of the global model ECHAM5-HAM

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Kazil, J.; Feichter, J.

    2009-04-01

    Since its first version developed by Stier et al. (2005), the global aerosol-climate model ECHAM5-HAM has gone through further development and updates. The changes in the model include (1) a new time integration scheme for the condensation of sulfuric acid gas on existing particles, (2) a new aerosol nucleation scheme that takes into account the charged nucleation caused by cosmic rays, and (3) a parameterization scheme explicitly describing the conversion of aerosol particles to cloud nuclei. In this work, simulations performed with the old and new model versions are evaluated against measurements reported in recent years. The focus is on the aerosol size distribution in the troposphere. Results show that modifications in the parameterizations have led to significant changes in the simulated aerosol concentrations. Vertical profiles of the total particle number concentration (diameter > 3 nm) compiled by Clarke et al. (2002) suggest that, over the Pacific in the upper free troposphere, the tropics are associated with much higher concentrations than the mid-latitude regions. This feature is more reasonably reproduced by the new model version, mainly due to the improved results of the nucleation mode aerosols. In the lower levels (2-5 km above the Earth's surface), the number concentrations of the Aitken mode particles are overestimated compared to both the Pacific data given in Clarke et al. (2002) and the vertical profiles over Europe reported by Petzold et al. (2007). The physical and chemical processes that have led to these changes are identified by sensitivity tests. References: Clarke and Kapustin: A Pacific aerosol survey - part 1: a decade of data on production, transport, evolution and mixing in the troposphere, J. Atmos. Sci., 59, 363-382, 2002. Petzold et al.: Perturbation of the European free troposphere aerosol by North American forest fire plumes during the ICARTT-ITOP experiment in summer 2004, Atmos. Chem. Phys., 7, 5105-5127, 2007.

  14. Climate impact of biofuels in shipping: global model studies of the aerosol indirect effect.

    PubMed

    Righi, Mattia; Klinger, Carolin; Eyring, Veronika; Hendricks, Johannes; Lauer, Axel; Petzold, Andreas

    2011-04-15

    Aerosol emissions from international shipping are recognized to have a large impact on the Earth's radiation budget, directly by scattering and absorbing solar radiation and indirectly by altering cloud properties. New regulations have recently been approved by the International Maritime Organization (IMO) aiming at progressive reductions of the maximum sulfur content allowed in marine fuels from the current 4.5% by mass down to 0.5% in 2020, with more restrictive limits already applied in some coastal regions. In this context, we use a global bottom-up algorithm to calculate geographically resolved emission inventories of gaseous (NOx, CO, SO2) and aerosol (black carbon, organic matter, sulfate) species for different kinds of low-sulfur fuels in shipping. We apply these inventories to study the resulting changes in radiative forcing, attributed to particles from shipping, with the global aerosol-climate model EMAC-MADE. The emission factors for the different fuels are based on measurements at a test bed of a large diesel engine. We consider both fossil fuel (marine gas oil) and biofuels (palm and soy bean oil) as a substitute for heavy fuel oil in the current (2006) fleet and compare their climate impact to that resulting from heavy fuel oil use. Our simulations suggest that ship-induced surface level concentrations of sulfate aerosol are strongly reduced, up to about 40-60% in the high-traffic regions. This clearly has positive consequences for pollution reduction in the vicinity of major harbors. Additionally, such reductions in the aerosol loading lead to a decrease by a factor of 3-4 in the indirect global aerosol effect induced by emissions from international shipping.

  15. The Role of Atmospheric Aerosol Concentration on Deep Convective Precipitation: Cloud-resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Li, X.; Khain, A.; Matsui, T.; Lang, S.; Simpson, J.

    2007-01-01

    Aerosols and especially their effect on clouds are one of the key components of the climate system and the hydrological cycle [Ramanathan et al., 2001]. Yet, the aerosol effect on clouds remains largely unknown and the processes involved not well understood. A recent report published by the National Academy of Science states "The greatest uncertainty about the aerosol climate forcing - indeed, the largest of all the uncertainties about global climate forcing - is probably the indirect effect of aerosols on clouds" [NRC, 2001]. The aerosol effect on clouds is often categorized into the traditional "first indirect (i.e., Twomey)" effect on the cloud droplet sizes for a constant liquid water path and the "semi-direct" effect on cloud coverage. The aerosol effect on precipitation processes, also known as the second type of aerosol indirect effect, is even more complex, especially for mixed-phase convective clouds. In this paper, a cloud-resolving model (CRM) with detailed spectral-bin microphysics was used to examine the effect of aerosols on three different deep convective cloud systems that developed in different geographic locations: South Florida, Oklahoma and the Central Pacific. In all three cases, rain reaches the ground earlier for the low CCN (clean) case. Rain suppression is also evident in all three high CCN (dirty) cases. However, this suppression only occurs during the first hour of the simulations. During the mature stages of the simulations, the effects of increasing aerosol concentration range from rain suppression in the Oklahoma case, to almost no effect in the Florida case, to rain enhancement in the Pacific case. These results show the complexity of aerosol interactions with convection.
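The "first indirect (Twomey)" effect invoked above has a standard textbook scaling worth making explicit: at fixed liquid water content, the cloud droplet effective radius varies as N^(-1/3) with droplet number concentration N. A minimal sketch of this scaling argument (not code from this study; the numbers are illustrative):

```python
# Twomey-effect arithmetic: distributing the same liquid water over N
# droplets gives per-droplet volume ~ 1/N, hence radius ~ N**(-1/3).

def effective_radius_ratio(n_clean, n_dirty):
    """Ratio r_e(dirty) / r_e(clean) at constant liquid water path."""
    return (n_clean / n_dirty) ** (1.0 / 3.0)

# An 8-fold increase in droplet number halves the effective radius,
# brightening the cloud at fixed liquid water path.
ratio = effective_radius_ratio(100.0, 800.0)
```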

  16. Progress in Global Multicompartmental Modelling of DDT

    NASA Astrophysics Data System (ADS)

    Stemmler, I.; Lammel, G.

    2009-04-01

    input parameters. Furthermore, better resolution of some processes could improve model performance. References: Marsland, S. J., Haak, H., Jungclaus, J. H., Latif, M., Röske, F. (2003): The Max-Planck-Institute global ocean/sea ice model with orthogonal curvilinear coordinates. Ocean Modelling 5, 91-127. Maier-Reimer, E., Kriest, I., Segschneider, J., Wetzel, P. (2005): The HAMburg Ocean Carbon Cycle Model HAMOCC 5.1 - Technical Description Release 1.1. Reports on Earth System Science 14. Stier, P., Feichter, J., Kinne, S., Kloster, S., Vignati, E., Wilson, J., Ganzeveld, L., Tegen, I., Werner, M., Balkanski, Y., Schulz, M., Boucher, O., Minikin, A., Petzold, A. (2005): The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys. 5, 1125-1156. Semeena, V. S., Feichter, J., Lammel, G. (2006): Impact of the regional climate and substance properties on the fate and atmospheric long-range transport of persistent organic pollutants - examples of DDT and γ-HCH. Atmos. Chem. Phys. 6, 1231-1248.

  17. Carbonaceous aerosols recorded in a southeastern Tibetan glacier: analysis of temporal variations and model estimates of sources and radiative forcing

    DOE PAGES

    Wang, Mo; Xu, B.; Cao, J.; ...

    2015-02-02

    High temporal resolution measurements of black carbon (BC) and organic carbon (OC) covering the time period of 1956–2006 in an ice core over the southeastern Tibetan Plateau show a distinct seasonal dependence of BC and OC with higher respective concentrations but a lower OC / BC ratio in the non-monsoon season than during the summer monsoon. We use a global aerosol-climate model, in which BC emitted from different source regions can be explicitly tracked, to quantify BC source–receptor relationships between four Asian source regions and the southeastern Tibetan Plateau as a receptor. The model results show that South Asia has the largest contribution to the present-day (1996–2005) mean BC deposition at the ice-core drilling site during the non-monsoon season (October to May) (81%) and all year round (74%), followed by East Asia (14% to the non-monsoon mean and 21% to the annual mean). The ice-core record also indicates stable and relatively low BC and OC deposition fluxes from the late 1950s to 1980, followed by an overall increase to recent years. This trend is consistent with the BC and OC emission inventories and the fuel consumption of South Asia (as the primary contributor to annual mean BC deposition). Moreover, the increasing trend of the OC / BC ratio since the early 1990s indicates a growing contribution of coal combustion and/or biomass burning to the emissions. The estimated radiative forcing induced by BC and OC impurities in snow has increased since 1980, suggesting an increasing potential influence of carbonaceous aerosols on the Tibetan glacier melting and the availability of water resources in the surrounding regions. Our study indicates that more attention to OC is merited because of its non-negligible light absorption and the recent rapid increases evident in the ice-core record.

  18. Carbonaceous aerosols recorded in a southeastern Tibetan glacier: analysis of temporal variations and model estimates of sources and radiative forcing

    SciTech Connect

    Wang, Mo; Xu, B.; Cao, J.; Tie, X.; Wang, Hailong; Zhang, Rudong; Qian, Yun; Rasch, Philip J.; Zhao, Shuyu; Wu, Guangjian; Zhao, Huabiao; Joswiak, Daniel R.; Li, Jiule; Xie, Ying

    2015-02-02

    High temporal resolution measurements of black carbon (BC) and organic carbon (OC) covering the time period of 1956–2006 in an ice core over the southeastern Tibetan Plateau show a distinct seasonal dependence of BC and OC with higher respective concentrations but a lower OC / BC ratio in the non-monsoon season than during the summer monsoon. We use a global aerosol-climate model, in which BC emitted from different source regions can be explicitly tracked, to quantify BC source–receptor relationships between four Asian source regions and the southeastern Tibetan Plateau as a receptor. The model results show that South Asia has the largest contribution to the present-day (1996–2005) mean BC deposition at the ice-core drilling site during the non-monsoon season (October to May) (81%) and all year round (74%), followed by East Asia (14% to the non-monsoon mean and 21% to the annual mean). The ice-core record also indicates stable and relatively low BC and OC deposition fluxes from the late 1950s to 1980, followed by an overall increase to recent years. This trend is consistent with the BC and OC emission inventories and the fuel consumption of South Asia (as the primary contributor to annual mean BC deposition). Moreover, the increasing trend of the OC / BC ratio since the early 1990s indicates a growing contribution of coal combustion and/or biomass burning to the emissions. The estimated radiative forcing induced by BC and OC impurities in snow has increased since 1980, suggesting an increasing potential influence of carbonaceous aerosols on the Tibetan glacier melting and the availability of water resources in the surrounding regions. Our study indicates that more attention to OC is merited because of its non-negligible light absorption and the recent rapid increases evident in the ice-core record.
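The tagged-tracer source attribution described above reduces to simple bookkeeping once each region's BC is carried as a separate model tracer: a region's share of deposition at the receptor is its tagged flux divided by the total. A minimal sketch with illustrative deposition fluxes (the values below are made up to demonstrate the arithmetic, not the paper's results):

```python
# Source-receptor shares from region-tagged BC deposition fluxes.

def source_contributions(tagged_dep):
    """tagged_dep: dict mapping source region -> tagged BC deposition
    flux at the receptor (any consistent units). Returns percent shares."""
    total = sum(tagged_dep.values())
    return {region: 100.0 * flux / total
            for region, flux in tagged_dep.items()}

# Hypothetical fluxes chosen so South Asia contributes 74% of the total.
shares = source_contributions({"South Asia": 7.4, "East Asia": 2.1,
                               "Central Asia": 0.3, "Southeast Asia": 0.2})
```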

  19. Reconstruction of the Tambora forcing with global aerosol models : Challenges and limitations

    NASA Astrophysics Data System (ADS)

    Khodri, Myriam; Zanchettin, Davide; Timmreck, Claudia

    2016-04-01

    It is now generally recognised that volcanic eruptions have an important effect on climate variability from inter-annual to decadal timescales. For the largest tropical volcanic eruptions of the last millennium, simulated volcanic surface cooling derived from climate models often disagrees with the cooling seen in tree-ring-based proxies. Furthermore, cooling estimates from simulations show large uncertainties. Such disagreement can be related to several sources, including inconsistency of the currently available volcanic forcing datasets, unrealistic modelled volcanic forcing, insufficient representation of relevant climate processes, and different background climate states simulated at the time of the eruption. In particular, for eruptions that occurred before the observational period, forcing characteristics related to the eruption magnitude and stratospheric aerosol properties are deduced from indirect evidence. So, while climatically relevant forcing properties for recent volcanic eruptions are relatively well constrained by direct observations, large uncertainties remain regarding processes of aerosol formation and evolution in the stratosphere after large tropical eruptions of the remote past. Several coordinated modelling assessments have been defined to frame future modeling activities and constrain the above-mentioned uncertainties. Among these, the sixth phase of the Coupled Model Intercomparison Project (CMIP6) has endorsed a multi-model assessment focused on the climatic response to strong volcanic eruptions (VolMIP). VolMIP defines a protocol for idealized volcanic-perturbation experiments to improve comparability among climate model results. Identification of a consensual volcanic forcing dataset for the 1815 Tambora eruption is a key step of VolMIP, as it is the largest-magnitude volcanic eruption of the past five centuries and the reference for the VolMIP core experiments. Therefore, as a first key step, five current/state-of-the-art global aerosol

  20. Development towards a global operational aerosol consensus: basic climatological characteristics of the International Cooperative for Aerosol Prediction Multi-Model Ensemble (ICAP-MME)

    NASA Astrophysics Data System (ADS)

    Sessions, W. R.; Reid, J. S.; Benedetti, A.; Colarco, P. R.; da Silva, A.; Lu, S.; Sekiyama, T.; Tanaka, T. Y.; Baldasano, J. M.; Basart, S.; Brooks, M. E.; Eck, T. F.; Iredell, M.; Hansen, J. A.; Jorba, O. C.; Juang, H.-M. H.; Lynch, P.; Morcrette, J.-J.; Moorthi, S.; Mulcahy, J.; Pradhan, Y.; Razinger, M.; Sampson, C. B.; Wang, J.; Westphal, D. L.

    2014-06-01

    Over the past several years, there has been a rapid development in the number and quality of global aerosol models intended for operational forecasting use. Indeed, most centers with global numerical weather prediction (NWP) capabilities have some program for aerosol prediction. These aerosol models typically have differences in their underlying meteorology as well as aerosol sources, sinks, microphysics and transformations. However, like similar diversity in aerosol climate models, the aerosol forecast models have fairly similar overall bulk error statistics for aerosol optical thickness (AOT) - one of the few aerosol metrics that is globally available. Experience in climate and weather prediction has shown that in situations such as this, where there are several independent models, a multi-model ensemble or consensus will be top performing in many key error metrics. Further, multi-model ensembles provide a highly valuable tool for forecasters attempting to predict severe aerosol events. Here we present the first steps in developing a global multi-model aerosol forecasting ensemble intended for eventual operational and basic research use. Drawing from the latest generation of quasi-operational aerosol models of the International Cooperative for Aerosol Prediction (ICAP), five-day AOT forecasts are analyzed for December 2011 through November 2012 from four institutions: ECMWF, JMA, NASA GSFC, and NRL/FNMOC. For dust, we also include the NOAA NGAC product in our analysis. The Barcelona Supercomputing Centre (NMMB) and UK Met Office dust products have also recently become available within ICAP, but have insufficient data to be included in this analysis period. A simple consensus ensemble of member and mean AOT fields for modal species (e.g., fine and coarse mode, and a separate dust ensemble) is used to create the ICAP Multi-Model Ensemble (ICAP-MME). The ICAP-MME is run daily at 0Z for 6 hourly forecasts out to 120 h. Basing metrics on comparisons to 21 regionally
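The "simple consensus ensemble" of member AOT fields amounts to an unweighted grid-point mean over the members that report data. A minimal sketch, assuming missing member data is flagged as NaN; member names and grid values are hypothetical, not the operational ICAP-MME product:

```python
import numpy as np

# Grid-point-wise multi-model mean AOT, skipping missing (NaN) members.

def consensus_aot(members):
    """members: dict name -> 2-D AOT array (NaN where unavailable).
    Returns the unweighted multi-model mean at each grid point."""
    stack = np.stack(list(members.values()))
    return np.nanmean(stack, axis=0)

members = {
    "model_a": np.array([[0.10, 0.20], [0.30, np.nan]]),
    "model_b": np.array([[0.20, 0.40], [0.10, 0.50]]),
}
mme = consensus_aot(members)
# Where model_a is missing, the consensus falls back to the members
# that do report (here, model_b alone at the lower-right point).
```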

  1. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2008-01-01

    Aerosols and especially their effect on clouds are one of the key components of the climate system and the hydrological cycle [Ramanathan et al., 2001]. Yet, the aerosol effect on clouds remains largely unknown and the processes involved not well understood. A recent report published by the National Academy of Science states "The greatest uncertainty about the aerosol climate forcing - indeed, the largest of all the uncertainties about global climate forcing - is probably the indirect effect of aerosols on clouds" [NRC, 2001]. The aerosol effect on clouds is often categorized into the traditional "first indirect (i.e., Twomey)" effect on the cloud droplet sizes for a constant liquid water path [Twomey, 1977] and the "semi-direct" effect on cloud coverage [e.g., Ackerman et al., 2001]. Enhanced aerosol concentrations can also suppress warm rain processes by producing a narrow droplet spectrum that inhibits collision and coalescence processes [e.g., Squires and Twomey, 1961; Warner and Twomey, 1967; Warner, 1968; Rosenfeld, 1999]. The aerosol effect on precipitation processes, also known as the second type of aerosol indirect effect [Albrecht, 1989], is even more complex, especially for mixed-phase convective clouds. Table 1 summarizes the key observational studies identifying the microphysical properties, cloud characteristics, thermodynamics and dynamics associated with cloud systems from high-aerosol continental environments. For example, atmospheric aerosol concentrations can influence cloud droplet size distributions, warm-rain processes, cold-rain processes, cloud-top height, the depth of the mixed phase region, and occurrence of lightning. In addition, high aerosol concentrations in urban environments could affect precipitation variability by providing an enhanced source of cloud condensation nuclei (CCN). Hypotheses have been developed to explain the effect of urban regions on convection and precipitation [van den Heever and Cotton, 2007 and Shepherd, 2005

  2. Vertical and Spatial Profiling of Arctic Black Carbon on the North Slope of Alaska 2015: Comparison of Model and Observation

    NASA Astrophysics Data System (ADS)

    Sedlacek, A. J., III; Feng, Y.; Biraud, S.; Springston, S. R.

    2015-12-01

    One of the major issues confronting aerosol climate simulations of the Arctic and Antarctic cryospheres is the lack of detailed data on the vertical and spatial distribution of aerosols with which to test these models. This is due, in part, to the inherent difficulty of conducting such measurements in extreme environments. One class of under-measured radiative forcing agents in the polar regions is the absorbing aerosols: black carbon and brown carbon. In particular, vertical profile information of BC is critical in reducing uncertainty in model assessment of aerosol radiative impact at high latitudes. During the summer of 2015, a Single-Particle Soot Photometer (SP2) was deployed aboard the Department of Energy (DOE) Gulfstream-1 (G-1) aircraft to measure refractory BC (rBC) concentrations as part of the DOE-sponsored ACME-V (ARM Airborne Carbon Measurements) campaign. This campaign was conducted from June through to mid-September along the North Slope of Alaska and was punctuated by vertical profiling over 5 sites (Atquasuk, Barrow, Ivotuk, Oliktok, and Toolik). In addition, measurements of CO, CO2 and CH4 were also taken to provide information on the spatial and seasonal differences in GHG sources and how these sources correlate with BC. Lastly, these aerosol and gas measurements provide an important dataset to assess the representativeness of ground sites at regional scales. Comparisons between observations and global climate model (CAM5) simulations will be augmented with a discussion of the capability of the model to capture observed monthly mean profiles of BC and stratified aerosol layers. Additionally, the ability of the SP2 to partition rBC-containing particles into nascent or aged species allows an evaluation of how well the CAM5 model captures aging of long-range-transported carbonaceous aerosols. 
Finally, model sensitivity studies will also be presented that investigated the relative importance of the different emission sectors to the summer Arctic

  3. Global modelling of direct and indirect effects of sea spray aerosol using a source function encapsulating wave state

    NASA Astrophysics Data System (ADS)

    Partanen, A.-I.; Dunne, E. M.; Bergman, T.; Laakso, A.; Kokkola, H.; Ovadnevaite, J.; Sogacheva, L.; Baisnée, D.; Sciare, J.; Manders, A.; O'Dowd, C.; de Leeuw, G.; Korhonen, H.

    2014-11-01

    Recently developed parameterizations for the sea spray aerosol source flux, encapsulating wave state, and its organic fraction were incorporated into the aerosol-climate model ECHAM-HAMMOZ to investigate the direct and indirect radiative effects of sea spray aerosol particles. Our simulated global sea salt emission of 805 Tg yr-1 (uncertainty range 378-1233 Tg yr-1) was much lower than typically found in previous studies. Modelled sea salt and sodium ion concentrations agreed relatively well with measurements in the smaller size ranges at Mace Head (annual normalized mean model bias -13% for particles with vacuum aerodynamic diameter Dva < 1 μm), Point Reyes (-29% for particles with aerodynamic diameter Da < 2.5 μm) and Amsterdam Island (-52% for particles with Da < 1 μm) but the larger sizes were overestimated (899% for particles with 2.5 μm < Da < 10 μm) at Amsterdam Island. This suggests that at least the high end of the previous estimates of sea spray mass emissions is unrealistic. On the other hand, the model clearly underestimated the observed concentrations of organic or total carbonaceous aerosol at Mace Head (-82%) and Amsterdam Island (-68%). The large overestimation (212%) of organic matter at Point Reyes was due to the contribution of continental sources. At the remote Amsterdam Island site, the organic concentration was underestimated especially in the biologically active months, suggesting a need to improve the parameterization of the organic sea spray fraction. Globally, the satellite-retrieved AOD over the oceans, using PARASOL data, was underestimated by the model (means over ocean 0.16 and 0.10, respectively); however, in the pristine region around Amsterdam Island the measured AOD fell well within the simulated uncertainty range. The simulated sea spray aerosol contribution to the indirect radiative effect was positive (0.3 W m-2), in contrast to previous studies. This positive effect was ascribed to the tendency of sea salt aerosol to
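The percentage biases quoted above can be sketched with the normalized mean bias metric, assuming the common definition (sum of model-minus-observation differences normalized by the sum of observations, in percent); conventions differ between studies, so treat this form as an assumption:

```python
# Normalized mean bias (NMB) in percent: negative means the model
# underestimates the observations on average, positive overestimates.

def normalized_mean_bias(model, obs):
    """model, obs: equal-length sequences of paired concentrations."""
    total_diff = sum(m - o for m, o in zip(model, obs))
    return 100.0 * total_diff / sum(obs)

# Toy paired samples (illustrative, not the campaign data).
nmb = normalized_mean_bias([0.9, 1.2, 0.6], [1.0, 1.5, 0.5])
```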

  4. Development and basic evaluation of a prognostic aerosol scheme (v1) in the CNRM Climate Model CNRM-CM6

    NASA Astrophysics Data System (ADS)

    Michou, M.; Nabat, P.; Saint-Martin, D.

    2015-03-01

    , correlation coefficients higher than 0.5 and lower model variance than observed. A large interannual variability can also be seen in the CALIOP vertical profiles over certain regions of the world. Overall, this prognostic aerosol scheme appears promising for aerosol-climate studies. There is room, however, for implementing more complex parameterisations in relation to aerosols.

  5. The Role of Atmospheric Aerosol Concentration on Deep Convective Precipitation: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2010-01-01

    Aerosols and especially their effect on clouds are one of the key components of the climate system and the hydrological cycle [Ramanathan et al., 2001]. Yet, the aerosol effect on clouds remains largely unknown and the processes involved not well understood. A recent report published by the National Academy of Science states "The greatest uncertainty about the aerosol climate forcing - indeed, the largest of all the uncertainties about global climate forcing - is probably the indirect effect of aerosols on clouds" [NRC, 2001]. The aerosol effect on clouds is often categorized into the traditional "first indirect (i.e., Twomey)" effect on the cloud droplet sizes for a constant liquid water path and the "semi-direct" effect on cloud coverage. The aerosol effect on precipitation processes, also known as the second type of aerosol indirect effect, is even more complex, especially for mixed-phase convective clouds. In this paper, a cloud-resolving model (CRM) with detailed spectral-bin microphysics was used to examine the effect of aerosols on three different deep convective cloud systems that developed in different geographic locations: South Florida, Oklahoma and the Central Pacific. In all three cases, rain reaches the ground earlier for the low CCN (clean) case. Rain suppression is also evident in all three high CCN (dirty) cases. However, this suppression only occurs during the first hour of the simulations. During the mature stages of the simulations, the effects of increasing aerosol concentration range from rain suppression in the Oklahoma case, to almost no effect in the Florida case, to rain enhancement in the Pacific case. These results show the complexity of aerosol interactions with convection. The model results suggest that evaporative cooling is a key process in determining whether high CCN reduces or enhances precipitation. 
Stronger evaporative cooling can produce a stronger cold pool and thus stronger low-level convergence through interactions

  6. Satellite-Derived Aerosol Climate Data Records in the ESA Aerosol_Cci Project: From ERS-2, Envisat to Sentinel-3

    NASA Astrophysics Data System (ADS)

    de Leeuw, Gerrit; Holzer-Popp, Thomas; North, Peter R. J.; Heckel, Andreas; Pinnock, Simon

    2015-12-01

    With the focus of Sentinel-3 on ocean applications and services, important parts of the payload are the Sea and Land Surface Temperature Radiometer (SLSTR) and the Ocean Land Colour Instrument (OLCI). Apart from Ocean applications, these instruments are also very important for atmospheric observations and in particular for aerosol retrieval. This is the reason why the predecessor instruments AATSR and MERIS have extensively been used in the ESA Climate Change Initiative project Aerosol_cci. In this contribution a brief overview of the current status of the Aerosol_cci project is presented. Full-mission time series of ATSR-2 and AATSR have been processed to provide 17 years of global aerosol information. Selected examples of recent achievements are presented. The experience with ATSR-2, AATSR and MERIS will be used to continue the current time series with SLSTR and OLCI.

  7. Aerosol direct radiative effects over the northwest Atlantic, northwest Pacific, and North Indian Oceans: estimates based on in-situ chemical and optical measurements and chemical transport modeling

    NASA Astrophysics Data System (ADS)

    Bates, T. S.; Anderson, T. L.; Baynard, T.; Bond, T.; Boucher, O.; Carmichael, G.; Clarke, A.; Erlick, C.; Guo, H.; Horowitz, L.; Howell, S.; Kulkarni, S.; Maring, H.; McComiskey, A.; Middlebrook, A.; Noone, K.; O'Dowd, C. D.; Ogren, J.; Penner, J.; Quinn, P. K.; Ravishankara, A. R.; Savoie, D. L.; Schwartz, S. E.; Shinozuka, Y.; Tang, Y.; Weber, R. J.; Wu, Y.

    2006-05-01

    calculations by observational inputs increases the clear-sky, 24-h averaged AOD (34±8%), top of atmosphere (TOA) DRE (32±12%), and TOA direct climate forcing of aerosols (DCF - change in radiative flux due to anthropogenic aerosols) (37±7%) relative to values obtained with "a priori" parameterizations of aerosol loadings and properties (GFDL RTM). The resulting constrained clear-sky TOA DCF is -3.3±0.47, -14±2.6, -6.4±2.1 W m-2 for the NIO, NWP, and NWA, respectively. With the use of constrained quantities (extensive and intensive parameters) the calculated uncertainty in DCF was 25% less than the "structural uncertainties" used in the IPCC-2001 global estimates of direct aerosol climate forcing. Such comparisons with observations and resultant reductions in uncertainties are essential for improving and developing confidence in climate model calculations incorporating aerosol forcing.

  8. Explicit entrainment parameterization in the general circulation model ECHAM5-HAM

    NASA Astrophysics Data System (ADS)

    Siegenthaler-Le Drian, Colombe; Spichtinger, Peter; Lohmann, Ulrike

    2010-05-01

    stratocumulus when applying the new parameterization. Moreover, even if the entrainment parameterization does not explicitly depend on the number of cloud droplets, the steep increase of liquid water path with increasing cloud droplet number concentration is reduced. Furthermore, the turbulent kinetic energy (TKE) is crucially affected. First, its vertical profile is smoothed compared to the very large values in the standard version. Moreover, due to the explicit addition of radiative cooling in the buoyancy flux, the maximum of TKE occurs at cloud top (as in reality) and not at cloud base (as in the standard model version). Finally, the trade-wind cumuli are better represented in terms of cloud cover. Indeed, the TKE source at cloud top enhances the latent heat flux, triggering the convective routine in shallow cumulus regions. References: [Lenderink et al., 2000] Lenderink, G., Van Meijgaard, E., and Holtslag, A. M. (2000). Evaluation of the ECHAM4 cloud-turbulence scheme for stratocumulus. Meteorol. Z., 9(1):41-47. [Lohmann et al., 2007] Lohmann, U., Stier, P., Hoose, C. et al. (2007). Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-HAM. Atmos. Chem. Phys., 7:3425-3446. [Quaas et al., 2009] Quaas, J., Ming, Y., Menon, S. et al. (2009). Aerosol indirect effects - general circulation model intercomparison and evaluation with satellite data. Atmos. Chem. Phys. Discuss., 9:12731-12779. [Roeckner et al., 2003] Roeckner, E., Bäuml, G., Bonaventura, L. et al. (2003). The atmospheric general circulation model ECHAM5, Part I: Model description. Technical Report 349, Max-Planck-Institute for Meteorology, Hamburg, Germany. [Stier et al., 2005] Stier, P., Feichter, J., Kinne, S. et al. (2005). The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys., 5:1125-1156. [Turton and Nicholls, 1987] Turton, J. D. and Nicholls, S. (1987). A study of the diurnal variation of stratocumulus using a multiple mixed layer model. Quart. J. Roy. Meteor. Soc., 113:969-1009.

  9. Uncertainty associated with convective wet removal of entrained aerosols in a global climate model

    NASA Astrophysics Data System (ADS)

    Croft, B.; Pierce, J. R.; Martin, R. V.; Hoose, C.; Lohmann, U.

    2012-11-01

    The uncertainties associated with the wet removal of aerosols entrained above convective cloud bases are investigated in a global aerosol-climate model (ECHAM5-HAM) under a set of limiting assumptions for the wet removal of the entrained aerosols. The limiting assumptions for the wet removal of entrained aerosols are negligible scavenging and vigorous scavenging (either through activation, with size-dependent impaction scavenging, or with the prescribed fractions of the standard model). To facilitate this process-based study, an explicit representation of cloud-droplet-borne and ice-crystal-borne aerosol mass and number, for the purpose of wet removal, is introduced into the ECHAM5-HAM model. This replaces and is compared with the prescribed cloud-droplet-borne and ice-crystal-borne aerosol fraction scavenging scheme of the standard model. A 20% to 35% uncertainty in simulated global, annual mean aerosol mass burdens and optical depth (AOD) is attributed to different assumptions for the wet removal of aerosols entrained above convective cloud bases. Assumptions about the removal of aerosols entrained above convective cloud bases control modeled upper tropospheric aerosol concentrations by as much as one order of magnitude. Simulated aerosols entrained above convective cloud bases contribute 20% to 50% of modeled global, annual mean aerosol mass convective wet deposition (about 5% to 10% of the total dry and wet deposition), depending on the aerosol species, when including wet scavenging of those entrained aerosols (either by activation, size-dependent impaction, or with the prescribed fraction scheme). Among the simulations, the prescribed fraction and size-dependent impaction schemes yield the largest global, annual mean aerosol mass convective wet deposition (by about two-fold). However, the prescribed fraction scheme has more vigorous convective mixed-phase wet removal (by two to five-fold relative to the size-dependent impaction scheme) since nearly all

  10. Manipulating ship fuel sulfur content and modeling the effects on air quality and climate

    NASA Astrophysics Data System (ADS)

    Partanen, Antti-Ilari; Laakso, Anton; Schmidt, Anja; Kokkola, Harri; Kuokkanen, Tuomas; Kerminen, Veli-Matti; Lehtinen, Kari E. J.; Laakso, Lauri; Korhonen, Hannele

    2013-04-01

    Aerosol emissions from international shipping are known to cause detrimental health effects on people, mainly via increased lung cancer and cardiopulmonary disease. On the other hand, the aerosol particles from ship emissions modify the properties of clouds and are believed to have a significant cooling effect on the global climate. In recent years, aerosol emissions from shipping have been more strictly regulated in order to improve air quality and thus decrease the mortality due to ship emissions. Decreasing the aerosol emissions from shipping is projected to weaken their cooling effect, which would intensify global warming further. In this study, we use the global aerosol-climate model ECHAM5.5-HAM2 to test whether continental air quality can be improved while still retaining the cooling effect from shipping. The model explicitly resolves emissions of aerosols and their precursor gases. The model also calculates the interaction between aerosol particles and clouds, and can thus predict the changes in cloud properties due to aerosol emissions. We design and simulate a scenario where ship fuel sulfur content is strictly limited to 0.1% near all coastal regions, but doubled in the open oceans from the current global mean value of 2.7% (geo-ships). This scenario is compared to three other simulations: 1) no shipping emissions at all (no-ships), 2) present-day shipping emissions (std-ships) and 3) a future scenario where sulfur content is limited to 0.1% in the coastal zones and to 0.5% in the open ocean (future-ships). The global mean radiative flux perturbation (RFP) in std-ships compared to no-ships is calculated to be -0.4 W m-2, which is in the range of previous estimates for present-day shipping emissions. In the geo-ships simulation the corresponding global mean RFP is roughly equal, but the RFP is, as expected, distributed more over the open oceans. In future-ships the decreased aerosol emissions provide a weaker cooling effect of only -0.1 W m-2. In

  11. Integrated Analyses of Multiple Worldwide Aerosol Mass Spectrometer Datasets for Improved Understanding of Aerosol Sources and Processes and for Comparison with Global Models

    SciTech Connect

    Zhang, Qi; Jimenez, Jose Luis

    2014-04-28

    The AMS is the only current instrument that provides real-time, quantitative, and size-resolved data on submicron non-refractory aerosol species with a time resolution of a few minutes or better. The AMS field data are multidimensional and massive, containing extremely rich information on aerosol chemistry, microphysics and dynamics—basic information that is required to evaluate and quantify the radiative climate forcing of atmospheric aerosols. The high time resolution of the AMS data also reveals details of aerosol dynamic variations that are vital to understanding the physico-chemical processes of atmospheric aerosols that govern aerosol properties relevant to the climate. There are two primary objectives of this 3-year project. Our first objective is to perform highly integrated analysis of dozens of AMS datasets acquired from various urban, forested, coastal, marine, mountain peak, and rural/remote locations around the world and synthesize and inter-compare results with a focus on the sources and the physico-chemical processes that govern aerosol properties relevant to aerosol climate forcing. Our second objective is to support our collaboration with global aerosol modelers, in which we will supply the size-resolved aerosol composition and temporal variation data (via a public web interface) and our analysis results for use in model testing and validation and for translation of the rich AMS database into model constraints that can improve climate forcing simulations. Several prominent global aerosol modelers have expressed enthusiastic support for this collaboration. The specific tasks that we propose to accomplish include 1) to develop, validate, and apply multivariate analysis techniques for improved characterization and source apportionment of organic aerosols; 2) to evaluate aerosol source regions and relative contributions based on back-trajectory integration (PSCF method); 3) to summarize and synthesize submicron aerosol information, including

  12. A 4-D Climatology (1979-2009) of the Monthly Tropospheric Aerosol Optical Depth Distribution over the Mediterranean Region from a Comparative Evaluation and Blending of Remote Sensing and Model Products

    NASA Technical Reports Server (NTRS)

    Nabat, P.; Somot, S.; Mallet, M.; Chiapello, I.; Morcrette, J. J.; Solomon, F.; Szopa, S.; Dulac, F.; Collins, W.; Ghan, S.; Horowitz, L. W.; Lamarque, J. F.; Lee, Y. H.; Naik, V.; Nagashima, T.; Shindell, D.; Skeie, R.

    2013-01-01

    aerosols showing a large vertical spread, and other continental and marine aerosols which are confined to the boundary layer. From this compilation, we propose a 4-D blended product from model and satellite data, consisting of monthly time series of the 3-D aerosol distribution at a 50 km horizontal resolution over the Euro-Mediterranean marine and continental region for the 2003-2009 period. The product is based on the total AOD from AQUA/MODIS, apportioned into sulfates, black and organic carbon from the MACC reanalysis, and into dust and sea-salt aerosols from RegCM-4 simulations, which are distributed vertically based on a CALIOP climatology. We extend the 2003-2009 reconstruction back to 1979 using the 2003-2009 average and applying the decreasing trend in sulfate aerosols from LMDz-OR-INCA, whose AOD trends over Europe and the Mediterranean are median among the ACCMIP models. Finally, optical properties of the different aerosol types in this region are proposed from Mie calculations so that this reconstruction can be included in regional climate models for aerosol radiative forcing and aerosol-climate studies.
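
    The apportionment step described above is simple per-cell arithmetic: the MODIS total AOD is split among species according to fractional contributions taken from the MACC reanalysis (sulfate, black and organic carbon) and RegCM-4 (dust, sea salt). A minimal sketch with made-up fractions (not values from either dataset):

```python
# Hypothetical illustration of apportioning a grid cell's total AOD into
# per-species contributions; the fractions below are invented, not taken
# from MACC or RegCM-4.
def apportion_aod(total_aod, fractions):
    """Split a total AOD by species fractions that must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return {species: total_aod * f for species, f in fractions.items()}

fractions = {"sulfate": 0.35, "black_carbon": 0.05, "organic_carbon": 0.10,
             "dust": 0.40, "sea_salt": 0.10}
species_aod = apportion_aod(0.25, fractions)
print(species_aod)
```

    By construction the species contributions sum back to the satellite total, which is what keeps the blended product consistent with the MODIS constraint.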

  13. Models and role models.

    PubMed

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in the present-day society triggered us to expand our knowledge in the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study separate processes which together may lead to dental caries. Also products and novel agents could be evaluated that interfere with either of the processes. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models but also role models are of the utmost importance in bringing and guiding research and researchers.

  14. Models, Fiction, and Fictional Models

    NASA Astrophysics Data System (ADS)

    Liu, Chuang

    2014-03-01

    The following sections are included: * Introduction * Why Most Models in Science Are Not Fictional * Typically Fictional Models in Science * Modeling the Unobservable * Fictional Models for the Unobservable? * References

  15. Mental Models, Conceptual Models, and Modelling.

    ERIC Educational Resources Information Center

    Greca, Ileana Maria; Moreira, Marco Antonio

    2000-01-01

    Reviews science education research into representations constructed by students in their interactions with the world, its phenomena, and artefacts. Features discussions of mental models, conceptual models, and the activity of modeling. (Contains 30 references.) (Author/WRM)

  16. Supermatrix models

    SciTech Connect

    Yost, S.A.

    1991-05-01

    Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two component plasma in one dimension. A stationary point of the model is described.

  17. ENTRAINMENT MODELS

    EPA Science Inventory

    This presentation presented information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that the problems that can be solved are rather simple. The ...

  18. Radiation Models

    ERIC Educational Resources Information Center

    James, W. G. G.

    1970-01-01

    Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

  19. Hydrological models are mediating models

    NASA Astrophysics Data System (ADS)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally made a clear distinction between physically-based and conceptual models. A newer philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that, irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models fulfil the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is too simplistic and refers at best to the stage at which theory and observations steer model construction. The large variety of ingredients involved in model construction deserves closer attention, as these are rarely explicitly presented in the peer-reviewed literature. 
We believe that devoting

  20. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Stratospheric Models and Measurements Workshop (M&M II) is the continuation of the effort started at the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as providing reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  1. Aerosol effects on summer monsoon over Asia during 1980s and 1990s

    NASA Astrophysics Data System (ADS)

    Tsai, I.-Chun; Wang, Wei-Chyung; Hsu, Huang-Hsiung; Lee, Wei-Liang

    2016-10-01

    The Community Earth System Model is used to study aerosol climate effects during the 1980s and 1990s, when anthropogenic SO2 emissions decreased in North America and Western Europe and increased in East and South Asia. In the 100-year simulations, aerosol forcing results in a cooler (-0.13 K) and drier (-0.01 mm/day) atmosphere with less shortwave radiation flux at the surface (-0.37 W/m2). The clear-sky shortwave radiation flux decreased over East Asia (-0.81 W/m2) and South Asia (-1.09 W/m2), but increased over Western Europe (+1.16 W/m2) and North America (+0.39 W/m2), consistent with aerosol loading changes. While changes in the spatial distributions of all-sky shortwave radiation and surface temperature are closely related to cloud changes, the changes in wind and precipitation do not correspond to aerosol loading changes, indicating the complexity of aerosol-cloud-circulation interactions. The East and South Asia monsoons were generally weakened, due mainly to a southward shift of the 200 hPa East Asia Jet (EAJ) and a decrease in 850 hPa winds; annual precipitation decreased by 2% in South Asia but increased by 2% in the Yangtze-Huai River Valley over East Asia. The uncertainties associated with aerosol climate effects are addressed within the context of model variability and the global warming effect. For the latter, while the aerosol effects decrease the greenhouse warming in the global mean, the regional responses differ. Nevertheless, the characteristics of the aerosol climate effects, including the southward shift of the 200 hPa EAJ and the weakened South Asia monsoon, persist when the climate becomes warmer, although their strength and geographical distribution are slightly modulated.

  2. Climate Models

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric "climate models" to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional, and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation and atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  3. Phenomenological models

    SciTech Connect

    Braby, L.A.

    1990-09-01

    The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. A range of models covering different endpoints and phenomena has developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. 43 refs., 13 figs.

  4. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does allow the presence of gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 with 500 to 1,000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  5. Ventilation Model

    SciTech Connect

    V. Chipman

    2002-10-05

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post

  6. Model Selection for Geostatistical Models

    SciTech Connect

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
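
    The abstract's central comparison, scoring a geostatistical model with AIC versus ignoring spatial correlation, can be sketched on synthetic data. The sketch below is illustrative only (the covariance form, its range, and all data are assumed, and this is not the authors' derivation): AIC = 2k - 2 ln L is evaluated once with an identity covariance and once with the exponential covariance used to generate the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
coords = rng.uniform(0, 10, size=(n, 2))               # random spatial sites
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate

# Exponential spatial covariance (assumed range 2.0, unit sill).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 2.0)
y = X @ np.array([1.0, 0.5]) + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

def gaussian_aic(y, X, Sigma, n_cov_params):
    """AIC = 2k - 2 ln L for a Gaussian linear model with a given correlation
    structure, profiling out the overall variance scale."""
    n = len(y)
    Si = np.linalg.inv(Sigma)
    beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)  # GLS estimate
    r = y - X @ beta
    s2 = (r @ Si @ r) / n                               # profiled scale
    _, logdet = np.linalg.slogdet(Sigma)
    loglik = -0.5 * (n * np.log(2 * np.pi * s2) + logdet + n)
    k = X.shape[1] + 1 + n_cov_params   # betas + scale + covariance params
    return 2 * k - 2 * loglik

aic_ignore = gaussian_aic(y, X, np.eye(n), n_cov_params=0)
aic_spatial = gaussian_aic(y, X, Sigma, n_cov_params=1)
print(aic_ignore, aic_spatial)  # the spatial model should score lower (better)
```

    Because the data carry strong spatial correlation, the model that accounts for it attains a much higher likelihood, and the small extra AIC penalty for the covariance parameter does not change the ranking, which is the qualitative point of the simulation study.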

  7. Modeling Sunspots

    ERIC Educational Resources Information Center

    Oh, Phil Seok; Oh, Sung Jin

    2013-01-01

    Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…

  8. Dispersion Modeling.

    ERIC Educational Resources Information Center

    Budiansky, Stephen

    1980-01-01

    This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispersion. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)

  9. Qualitative modeling.

    PubMed

    Forbus, Kenneth D

    2011-07-01

    Qualitative modeling concerns the representations and reasoning that people use to understand continuous aspects of the world. Qualitative models formalize everyday notions of causality and provide accounts of how to ground symbolic, relational representations in perceptual processes. This article surveys the basic ideas of qualitative modeling and their applications from a cognitive science perspective. It describes the basic principles of qualitative modeling, and a variety of qualitative representations that have been developed for quantities and for relationships between them, providing a kind of qualitative mathematics. Three ontological frameworks for organizing modeling knowledge (processes, components, and field) are summarized, along with research on automatically assembling models for particular tasks from such knowledge. Qualitative simulation and how it carves up time into meaningful units is discussed. We discuss several accounts of causal reasoning about dynamical systems, based on different choices of qualitative mathematics and ontology. Qualitative spatial reasoning is explored, both in terms of relational systems and visual reasoning. Applications of qualitative models of particular interest to cognitive scientists are described, including how they have been used to capture the expertise of scientists and engineers and how they have been used in education. Open questions and frontiers are also discussed, focusing on relationships between ideas developed in the qualitative modeling community and other areas of cognitive science. WIREs Cogn Sci 2011, 2:374-391. DOI: 10.1002/wcs.115 For further resources related to this article, please visit the WIREs website.

  10. Turbulence modeling

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge E.

    1995-01-01

    The objective of this work is to develop, verify, and incorporate the baseline two-equation turbulence models which account for the effects of compressibility into the three-dimensional Reynolds averaged Navier-Stokes (RANS) code and to provide documented descriptions of the models and their numerical procedures so that they can be implemented into 3-D CFD codes for engineering applications.

  11. Climate models and model evaluation

    SciTech Connect

    Gates, W.L.

    1994-12-31

    This brief overview addresses aspects of the nature, uses, evaluation and limitations of climate models. A comprehensive global modeling capability has been achieved only for the physical climate system, which is characterized by processes that serve to transport and exchange momentum, heat and moisture within and between the atmosphere, ocean and land surface. The fundamental aim of climate modeling, and the justification for the use of climate models, is the need to achieve a quantitative understanding of the operation of the climate system and to exploit any potential predictability that may exist.

  12. OSPREY Model

    SciTech Connect

    Veronica J. Rutledge

    2013-01-01

    The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
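
    The sizing step mentioned above, turning breakthrough data into bed capacity, amounts to integrating the area above the breakthrough curve: capacity = Q * C0 * ∫ (1 - C(t)/C0) dt. A minimal sketch with invented numbers (this is not OSPREY or MOOSE code):

```python
# Illustrative bed-capacity calculation from a breakthrough curve;
# the times, C/C0 values, flow Q, and inlet concentration C0 are made up.
def bed_capacity(times, c_over_c0, Q, C0):
    """Trapezoidal integral of Q*C0*(1 - C/C0) over the run."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * ((1.0 - c_over_c0[i - 1]) + (1.0 - c_over_c0[i])) * dt
    return Q * C0 * area

times = [0, 1, 2, 3, 4, 5, 6]                    # h
c_c0 = [0.0, 0.0, 0.05, 0.4, 0.85, 0.98, 1.0]    # outlet/inlet concentration
capacity = bed_capacity(times, c_c0, Q=10.0, C0=0.002)  # e.g. m3/h, mol/m3
print(capacity)  # -> 0.0644 mol adsorbed for these numbers
```

    The capacity so obtained, divided by the sorbent mass, gives the loading used to size columns for a target run time.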

  13. Model hydrographs

    USGS Publications Warehouse

    Mitchell, W.D.

    1972-01-01

    Model hydrographs are composed of pairs of dimensionless ratios, arrayed in tabular form, which, when modified by the appropriate values of rainfall excess and by the time and areal characteristics of the drainage basin, satisfactorily represent the flood hydrograph for the basin. Model hydrographs are developed from a dimensionless translation hydrograph, having a time base of T hours and appropriately modified for storm duration by routing through reservoir storage, S = kO^x. Models fall into two distinct classes: (1) those for which the value of x is unity and which have all the characteristics of true unit hydrographs and (2) those for which the value of x is other than unity and to which the unit-hydrograph principles of proportionality and superposition do not apply. Twenty-six families of linear models and eight families of nonlinear models in tabular form constitute the principal subject of this report. Supplemental discussions describe the development of the models and illustrate their application. Other sections of the report, supplemental to the tables, describe methods of determining the hydrograph characteristics T, k, and x, both from observed hydrographs and from the physical characteristics of the drainage basin. Five illustrative examples of use show that the models, when properly converted to incorporate actual rainfall excess and the time and areal characteristics of the drainage basins, do indeed satisfactorily represent the observed flood hydrographs for the basins.
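
    For the linear class (x = 1), the storage relation S = kO reduces routing to a simple recurrence, and the proportionality property of a true unit hydrograph follows directly. A sketch (illustrative code with invented inflow values and k, not taken from the report):

```python
def route_linear_reservoir(inflow, k, dt):
    """Route an inflow hydrograph through linear storage S = k*O.

    Explicit Euler step on k * dO/dt = I - O; k and dt in hours,
    flows in any consistent unit.
    """
    out, o = [], 0.0
    for i in inflow:
        o += dt / k * (i - o)
        out.append(o)
    return out

# Linearity (x = 1): doubling every inflow ordinate doubles every
# outflow ordinate -- the proportionality principle of unit hydrographs.
inflow = [0, 2, 4, 3, 1, 0, 0, 0]
q1 = route_linear_reservoir(inflow, k=3.0, dt=1.0)
q2 = route_linear_reservoir([2 * i for i in inflow], k=3.0, dt=1.0)
```

    For x other than unity the recurrence becomes nonlinear in O, and this scaling argument no longer holds, which is why the report tabulates those families separately.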

  14. Stereometric Modelling

    NASA Astrophysics Data System (ADS)

    Grimaldi, P.

    2012-07-01

    Stereometric modelling means modelling achieved with: - the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); - the visualization of the shot in two distinct windows; - the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately used for the simple perspective rendering of an object, the word stereo must be added, so that "3D stereo vision" stands for a true three-dimensional view in which the width, height, and depth of the surveyed image can be measured. A stereometric model, either real or virtual, is developed through the "materialization", real or virtual, of the optical-stereometric model made visible with a stereoscope. Continuous online updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available online at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  15. A Model for Math Modeling

    ERIC Educational Resources Information Center

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  16. Anchor Modeling

    NASA Astrophysics Data System (ADS)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.

  17. Programming models

    SciTech Connect

    Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim

    2011-01-14

    A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

  18. Model Lungs.

    ERIC Educational Resources Information Center

    Taylor, Emma

    1991-01-01

    A cheap and simple model that can be made and used by pupils to study the human breathing mechanism is presented. A list of needed materials, procedures for construction, possible refinements, and method of use are included. (KR)

  19. Micromolecular modeling

    NASA Technical Reports Server (NTRS)

    Guillet, J. E.

    1984-01-01

    A reaction-kinetics-based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30-year period were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure. It is indicated that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. It is shown that the effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers. It is found that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.

  20. Environmental Modeling

    EPA Pesticide Factsheets

    EPA's modeling community is working to gain insights into certain parts of a physical, biological, economic, or social system by conducting environmental assessments that inform Agency decision making on complex environmental issues.

  1. Energy Models

    EPA Science Inventory

    Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...

  2. PREDICTIVE MODELS

    SciTech Connect

    Ray, R.M.

    1986-12-01

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  3. Model selection for geostatistical models.

    PubMed

    Hoeting, Jennifer A; Davis, Richard A; Merton, Andrew A; Thompson, Sandra E

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is often ignored in the selection of explanatory variables, and this can influence model selection results. For example, the importance of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often-used traditional approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also apply the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored. R software to implement the geostatistical model selection methods described in this paper is available in the Supplement.
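
    The AIC referred to above balances goodness of fit against model complexity, AIC = 2k - 2 ln L. A generic sketch of the comparison (illustrative only: it uses an ordinary Gaussian likelihood with invented residuals, not the geostatistical likelihood of the paper):

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L; smaller is better."""
    return 2 * n_params - 2 * log_likelihood

def gaussian_loglik(residuals, sigma):
    """Log-likelihood of residuals under i.i.d. N(0, sigma^2) errors."""
    n = len(residuals)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum(r * r for r in residuals) / (2 * sigma ** 2))

# Toy comparison: model A fits better but spends 6 parameters; model B
# fits worse with only 2. AIC trades the likelihood term against the
# parameter penalty, and the smaller value is preferred.
res_a = [0.1, -0.2, 0.15, -0.05, 0.1]
res_b = [0.5, -0.6, 0.55, -0.45, 0.5]
aic_a = aic(gaussian_loglik(res_a, sigma=0.2), n_params=6)
aic_b = aic(gaussian_loglik(res_b, sigma=0.6), n_params=2)
```

    The paper's point is that when the likelihood term ignores spatial correlation, the resulting AIC values can rank the explanatory variables incorrectly.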

  4. Scalable Models Using Model Transformation

    DTIC Science & Technology

    2008-07-13

    huge number of web documents. We have created a simplified demo using 5 worker machines in the Ptolemy II modeling and simulation environment [3], as...the pattern of the transformation rule matches any subgraph of the input model. When the TransformationRule actor is opened in the Ptolemy II GUI...tool developed in the Ptolemy II framework, existing tools include AGG [14], PROGRES [15], AToM3 [16], FUJABA [17], VIATRA2 [18], and GReAT [19

  5. Modeling reality

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from networks, and systems that track and support work flows in organizations.

  6. Reflectance Modeling

    NASA Technical Reports Server (NTRS)

    Smith, J. A. (Principal Investigator)

    1985-01-01

    The overall goal of this work has been to develop a set of computational tools and media abstractions for the terrain bidirectional reflectance problem. The modeling of soil and vegetation surfaces has been emphasized with a gradual increase in the complexity of the media geometries treated. Pragmatic problems involved in the combined modeling of soil, vegetation, and atmospheric effects have been of interest and one of the objectives has been to describe the canopy reflectance problem in a classical radiative transfer sense permitting easier inclusion of our work by other workers in the radiative transfer field.

  7. Supernova models

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1980-01-01

    Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the /sup 56/Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.

  8. Ensemble Models

    EPA Science Inventory

    Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...

  9. Modeling Convection

    ERIC Educational Resources Information Center

    Ebert, James R.; Elliott, Nancy A.; Hurteau, Laura; Schulz, Amanda

    2004-01-01

    Students must understand the fundamental process of convection before they can grasp a wide variety of Earth processes, many of which may seem abstract because of the scales on which they operate. Presentation of a very visual, concrete model prior to instruction on these topics may facilitate students' understanding of processes that are largely…

  10. Painting models

    NASA Astrophysics Data System (ADS)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics, and art. Here we apply this art form to numerical models and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization, and barrier design. The system can be fed using any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology, and turbulence.
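
    The core of such image-based flow visualization is advecting a drawing by the model's velocity field. A toy semi-Lagrangian step on plain Python lists (a sketch of the general technique only, not the WebGL shader the abstract refers to):

```python
def advect(image, u, v, dt):
    """One semi-Lagrangian step: each cell pulls the value from the
    upstream point (x - u*dt, y - v*dt), nearest-neighbour sampling.

    image, u, v are equally sized 2-D lists; boundaries clamp.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = min(w - 1, max(0, round(x - u[y][x] * dt)))
            sy = min(h - 1, max(0, round(y - v[y][x] * dt)))
            out[y][x] = image[sy][sx]
    return out

# A single bright pixel in a uniform rightward flow moves one cell right.
img = [[0.0] * 5 for _ in range(5)]
img[2][1] = 1.0
u = [[1.0] * 5 for _ in range(5)]   # velocity component to the right
v = [[0.0] * 5 for _ in range(5)]
moved = advect(img, u, v, dt=1.0)
```

    The WebGL version described in the abstract does the same gather per pixel in a fragment shader, which is what makes 1M cells at 60 Hz feasible.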

  11. Entrepreneurship Models.

    ERIC Educational Resources Information Center

    Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.

    This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…

  12. Why model?

    PubMed

    Wolkenhauer, Olaf

    2014-01-01

    Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of gene and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"

  13. Modeling Muscles

    ERIC Educational Resources Information Center

    Goodwyn, Lauren; Salm, Sarah

    2007-01-01

    Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…

  14. Atmospheric Modeling

    EPA Science Inventory

    Although air quality models have been applied historically to address issues specific to ambient air quality standards (i.e., one criteria pollutant at a time) or welfare (e.g., acid deposition or visibility impairment), they are inherently multipollutant based. Therefore, in pri...

  15. Criticality Model

    SciTech Connect

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality

  16. Modeling Molecules

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The molecule modeling method known as Multibody Order (N) Dynamics, or MBO(N)D, was developed by Moldyn, Inc. at Goddard Space Flight Center through funding provided by the SBIR program. The software can model the dynamics of molecules through technology which simulates low-frequency molecular motions and properties, such as movements among a molecule's constituent parts. With MBO(N)D, a molecule is substructured into a set of interconnected rigid and flexible bodies. These bodies replace individual atoms, removing the computational burden of tracking each one. Moldyn's technology cuts computation time while increasing accuracy. The MBO(N)D technology is available as Insight II 97.0 from Molecular Simulations, Inc. Currently the technology is used to account for forces on spacecraft parts and to perform molecular analyses for pharmaceutical purposes. It permits the solution of molecular dynamics problems on a moderate workstation, as opposed to on a supercomputer.

  17. Combustor Modelling

    DTIC Science & Technology

    1980-02-01

    three-dimensional gas turbine combustion chamber flows. ASME 23rd Int. Gas Turbine Conference. London (1978) 10). R.S. Reynolds, T.E. Kuhn & H.C. Mongia ... combustion data. The concept of this global model approach is to transform the TJI region in a physically compatible way that can be computationally...is also addressed. FUEL EFFECTS ON GAS TURBINE COMBUSTION: Fuel characteristics which are most likely to affect the design of future

  18. Nuclear Models

    NASA Astrophysics Data System (ADS)

    Fossión, Rubén

    2010-09-01

    The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). Because of the above reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.

  19. Model checking

    NASA Technical Reports Server (NTRS)

    Dill, David L.

    1995-01-01

    Automatic formal verification methods for finite-state systems, also known as model-checking, successfully reduce labor costs since they are mostly automatic. Model checkers explicitly or implicitly enumerate the reachable state space of a system, whose behavior is described implicitly, perhaps by a program or a collection of finite automata. Simple properties, such as mutual exclusion or absence of deadlock, can be checked by inspecting individual states. More complex properties, such as lack of starvation, require search for cycles in the state graph with particular properties. Specifications to be checked may consist of built-in properties, such as deadlock or 'unspecified receptions' of messages, another program or implicit description, to be compared with a simulation, bisimulation, or language inclusion relation, or an assertion in one of several temporal logics. Finite-state verification tools are beginning to have a significant impact in commercial designs. There are many success stories of verification tools finding bugs in protocols or hardware controllers. In some cases, these tools have been incorporated into design methodology. Research in finite-state verification has been advancing rapidly, and is showing no signs of slowing down. Recent results include probabilistic algorithms for verification, exploitation of symmetry and independent events, and the use of symbolic representations for Boolean functions and systems of linear inequalities. One of the most exciting areas for further research is the combination of model-checking with theorem-proving methods.
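
    The explicit state-space enumeration described above fits in a few lines. A toy checker (an illustrative sketch, not any production tool) enumerates reachable states breadth-first and reports a state with no enabled transition as a deadlock:

```python
from collections import deque

def find_deadlock(initial, successors):
    """Breadth-first enumeration of the reachable state space.

    successors(state) -> iterable of next states. Returns a reachable
    deadlocked state (one with no outgoing transitions), or None.
    """
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        nxt = list(successors(s))
        if not nxt:
            return s          # deadlock: no enabled transition
        for t in nxt:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

# Toy system: two counters that can only increment up to 2;
# the state (2, 2) has no enabled transition and is a deadlock.
def succ(state):
    a, b = state
    moves = []
    if a < 2:
        moves.append((a + 1, b))
    if b < 2:
        moves.append((a, b + 1))
    return moves
```

    Checking richer properties (cycles for liveness, temporal-logic assertions) replaces the "no successors" test with more elaborate graph searches over the same enumerated space.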

  20. Modeling biomembranes.

    SciTech Connect

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.

  1. Molecular Modeling

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.

    1999-06-01

    Molecular modeling has trickled down from the realm of pharmaceutical and research laboratories into the realm of undergraduate chemistry instruction. It has opened avenues for the visualization of chemical concepts that previously were difficult or impossible to convey. I am sure that many of you have developed exercises using the various molecular modeling tools. It is the desire of this Journal to become an avenue for you to share these exercises among your colleagues. It is to this end that Ron Starkey has agreed to edit such a column and to publish not only the description of such exercises, but also the software documents they use. The WWW is the obvious medium to distribute this combination and so accepted submissions will appear online as a feature of JCE Internet. Typical molecular modeling exercise: finding conformation energies. Molecular Modeling Exercises and Experiments is the latest feature column of JCE Internet, joining Conceptual Questions and Challenge Problems, Hal's Picks, and Mathcad in the Chemistry Curriculum. JCE Internet continues to seek submissions in these areas of interest and submissions of general interest. If you have developed materials and would like to submit them, please see our Guide to Submissions for more information. The Chemical Education Resource Shelf, Equipment Buyers Guide, and WWW Site Review would also like to hear about chemistry textbooks and software, equipment, and WWW sites, respectively. Please consult JCE Internet Features to learn more about these resources at JCE Online. Email Announcements Would you like to be informed by email when the latest issue of the Journal is available online? when a new JCE Software title is shipping? when a new JCE Internet article has been published or is available for Open Review? when your subscription is about to expire? A new feature of JCE Online makes this possible. Visit our Guestbook to learn how. When

  2. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  3. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  4. Biomimetic modelling.

    PubMed Central

    Vincent, Julian F V

    2003-01-01

    Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more complete and certain understanding and the possibility of further revelations for application in engineering. This is a pathway as yet unformalized, and one that offers the possibility that engineers can also be scientists. PMID:14561351

  5. Remote Sensing of Aerosol in the Terrestrial Atmosphere from Space: New Missions

    NASA Technical Reports Server (NTRS)

    Milinevsky, G.; Yatskiv, Ya.; Degtyaryov, O.; Syniavskyi, I.; Ivanov, Yu.; Bovchaliuk, A.; Mishchenko, M.; Danylevsky, V.; Sosonkin, M.; Bovchaliuk, V.

    2015-01-01

    The distribution and properties of atmospheric aerosols on a global scale are not well known in terms of determination of their effects on climate. This is mostly due to extreme variability of aerosol concentrations, properties, sources, and types. Aerosol climate impact is comparable to the effect of greenhouse gases, but its influence is more difficult to measure, especially with respect to aerosol microphysical properties and the evaluation of anthropogenic aerosol effect. There are many satellite missions studying aerosol distribution in the terrestrial atmosphere, such as MISR/Terra, OMI/Aura, AVHRR, MODIS/Terra and Aqua, CALIOP/CALIPSO. To improve the quality of data and climate models, and to reduce aerosol climate forcing uncertainties, several new missions are planned. The gap in orbital instruments for studying aerosol microphysics has arisen after the Glory mission failed during launch in 2011. In this review paper, we describe several planned aerosol space missions, including the Ukrainian project Aerosol-UA that obtains data using a multi-channel scanning polarimeter and wide-angle polarimetric camera. The project is designed for remote sensing of the aerosol microphysics and cloud properties on a global scale.

  6. Remote sensing of aerosol in the terrestrial atmosphere from space: "AEROSOL-UA" mission

    NASA Astrophysics Data System (ADS)

    Yatskiv, Yaroslav; Milinevsky, Gennadi; Degtyarev, Alexander

    2016-07-01

The distribution and properties of atmospheric aerosols on a global scale are not well known in terms of determination of their effects on climate. This is mostly due to the extreme variability of aerosol concentrations, properties, sources, and types. The aerosol climate impact is comparable to the effect of greenhouse gases, but its influence is more difficult to measure, especially with respect to aerosol microphysical properties and the evaluation of the anthropogenic aerosol effect. There are many satellite missions studying aerosol distribution in the terrestrial atmosphere, such as MISR/Terra, OMI/Aura, AVHRR, MODIS/Terra and Aqua, and CALIOP/CALIPSO. To improve the quality of data and climate models, and to reduce aerosol climate forcing uncertainties, several new missions are planned. A gap in orbital instruments for studying aerosol microphysics has existed since the Glory mission failed during launch in 2011. In this review paper, we describe several planned aerosol space missions, including the Ukrainian project AEROSOL-UA, which will obtain data using a multi-channel scanning polarimeter and a wide-angle polarimetric camera. The mission is designed for remote sensing of aerosol microphysics and cloud properties on a global scale.

  7. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  8. Modeling metrology for calibration of OPC models

    NASA Astrophysics Data System (ADS)

    Mack, Chris A.; Raghunathan, Ananthan; Sturtevant, John; Deng, Yunfei; Zuniga, Christian; Adam, Kostas

    2016-03-01

    Optical Proximity Correction (OPC) has continually improved in accuracy over the years by adding more physically based models. Here, we further extend OPC modeling by adding the Analytical Linescan Model (ALM) to account for systematic biases in CD-SEM metrology. The ALM was added to a conventional OPC model calibration flow and the accuracy of the calibrated model with the ALM was compared to the standard model without the ALM using validation data. Without using any adjustable parameters in the ALM, OPC validation accuracy was improved by 5%. While very preliminary, these results give hope that modeling metrology could be an important next step in OPC model improvement.

  9. Using Aerocom Results to Constrain Black Carbon, Sulphate and Total Direct Aerosol Radiative Forcing and Their Uncertainties

    NASA Astrophysics Data System (ADS)

    Samset, B. H.; Myhre, G.

    2014-12-01

Aerosols affect the global radiative balance, and hence the climate, through a multitude of processes. However, even the direct interaction of aerosols with incoming sunlight is at present insufficiently constrained. Here we compare the output of 15 recent aerosol climate models (AeroCom Phase II), both column averaged and vertically resolved. Through a simple Monte Carlo approach, we show that the model-based total anthropogenic aerosol direct radiative forcing (DRF) uncertainty may be underestimated. Constraining modelled vertical profiles of black carbon (BC) concentration to aircraft measurements in remote regions, we further show that recent BC DRF estimates may be biased high. A short modelled BC lifetime is indicated as a necessary, though not sufficient, requirement for reproducing measurements. Finally, modelled sulphate aerosol DRF is discussed in the context of differences in representation of humidity and hygroscopic growth in the models.
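The Monte Carlo treatment of multi-model uncertainty described above can be sketched with a toy calculation; the per-model forcing values and uncertainties below are invented for illustration, not AeroCom Phase II results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-model direct radiative forcing estimates (W/m^2):
# a mean and an assumed per-model uncertainty for each of five models.
model_means = np.array([-0.30, -0.35, -0.20, -0.45, -0.25])
model_sigmas = np.array([0.10, 0.08, 0.12, 0.15, 0.09])

n_draws = 100_000
# Each draw picks a model at random and perturbs its estimate, so the
# sampled spread combines inter-model diversity with per-model uncertainty.
idx = rng.integers(0, len(model_means), size=n_draws)
samples = rng.normal(model_means[idx], model_sigmas[idx])

mean = samples.mean()
lo, hi = np.percentile(samples, [5, 95])
print(f"DRF ensemble mean {mean:.2f} W/m^2, 90% range [{lo:.2f}, {hi:.2f}]")
```

The resulting 90% range is wider than the spread of the model means alone, which is the sense in which a multi-model mean by itself can understate the total uncertainty.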

  10. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to…

  11. I&C Modeling in SPAR Models

    SciTech Connect

    John A. Schroeder

    2012-06-01

The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating-plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], various approaches for extending detailed I&C modeling to other SPAR models are investigated here.

  12. Engineering Structurally Configurable Models with Model Transformation

    DTIC Science & Technology

    2008-12-15

    model in the case of Simulink, and a dataflow model in the case of LabVIEW). Research modeling tools such as Ptolemy II [14], ForSyDe [21], SPEX [30...functionality of our model transformation tool built in the Ptolemy II framework, and its application to large models of distributed and parallel embedded...in Ptolemy II, the same idea can be applied to other modeling tools such as Simulink, LabVIEW, ForSyDe, SPEX and ModHel’X. Moreover, the recent OMG

  13. Final Report for Cloud-Aerosol Physics in Super-Parameterized Atmospheric Regional Climate Simulations (CAP-SPARCS)(DE-SC0002003) for 8/15/2009 through 8/14/2012

    SciTech Connect

    Russell, Lynn M; Somerville, Richard C.J.

    2012-11-05

Improving the representation of local and non-local aerosol interactions in state-of-the-science regional climate models is a priority for the coming decade (Zhang, 2008). With this aim in mind, we have combined two new technologies that have a useful synergy: (1) an aerosol-enabled regional climate model (the Advanced Weather Research and Forecasting Model with Chemistry, WRF-Chem), whose primary weakness is a lack of high-quality boundary conditions, and (2) an aerosol-enabled multiscale modeling framework (the PNNL Multiscale Aerosol Climate Model, MACM), which is global but captures aerosol-convection-cloud feedbacks, and is thus an ideal source of boundary conditions. Combining these two approaches has resulted in an aerosol-enabled modeling framework that not only resolves high-resolution details in a particular region, but crucially does so within a global context that is similarly faithful to multi-scale aerosol-climate interactions. We have applied and improved the representation of aerosol interactions by evaluating model performance over multiple domains, with (1) an extensive evaluation of the representation of mid-continent precipitation by multiscale modeling, (2) two focused studies of the transport of aerosol plumes to the eastern United States for comparison with observations made as part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT), the first idealized and the second linked to an extensive wildfire plume, and (3) the extension of these ideas to the development of a new approach to evaluating aerosol indirect effects with limited-duration model runs by nudging to observations. This research supported the work of one postdoc (Zhan Zhao) for two years and contributed to the training and research of two graduate students. Four peer-reviewed publications have resulted from this work, and groundwork for a follow-on project was completed.

  14. Modeling cholera outbreaks

    PubMed Central

    Longini, Ira M.; Morris, J. Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios. PMID:23412687
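The kind of compartmental model used for cholera can be illustrated with a Codeço-style sketch, in which susceptibles are infected through a contaminated water reservoir; all parameter values below are invented for illustration and are not calibrated to Haiti or any real outbreak:

```python
# A Codeco-style cholera model sketch: susceptibles (S) become infected (I)
# through a water reservoir of vibrios (B). Parameter values are invented.
def simulate(days=200, dt=0.1):
    S, I, B = 10_000.0, 10.0, 0.0   # people susceptible/infected; vibrio conc.
    beta, K = 1.0, 1e6              # exposure rate, half-saturation concentration
    gamma = 0.2                     # recovery rate (1/day)
    xi, delta = 10.0, 0.33          # shedding rate, vibrio decay rate (1/day)
    infected = []
    for _ in range(int(days / dt)):
        force = beta * B / (B + K)  # dose-response force of infection
        dS = -force * S
        dI = force * S - gamma * I
        dB = xi * I - delta * B
        S += dS * dt; I += dI * dt; B += dB * dt  # forward Euler step
        infected.append(I)
    return S, infected

S_final, infected = simulate()
peak = max(infected)
print(f"susceptibles remaining: {S_final:.0f}, epidemic peak: {peak:.0f}")
```

An intervention such as water treatment can be explored by raising the vibrio decay rate `delta`, which is the sort of what-if question these models are used to answer.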

  15. Air Quality Dispersion Modeling - Alternative Models

    EPA Pesticide Factsheets

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  16. Uncertainty Modeling Via Frequency Domain Model Validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Andrisani, Dominick, II

    1999-01-01

The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.

  17. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
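The classical side of such regressor selection can be sketched as a best-subset search scored by AIC. The data below are synthetic and the logistic fit is a hand-rolled Newton-Raphson routine, not any particular statistics package:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y depends on x1 only; x2 is pure noise. Best-subset
# selection by AIC should therefore prefer a model containing x1.
n = 500
X = rng.normal(size=(n, 2))
logits = 1.5 * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def fit_logistic(Xd, y, iters=50):
    """Newton-Raphson maximum likelihood; returns the maximized log-likelihood."""
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ beta))
        W = p * (1 - p)
        grad = Xd.T @ (y - p)
        hess = (Xd * W[:, None]).T @ Xd
        beta += np.linalg.solve(hess, grad)
    p = 1 / (1 + np.exp(-Xd @ beta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

results = {}
for subset in [(), (0,), (1,), (0, 1)]:
    Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    ll = fit_logistic(Xd, y)
    results[subset] = 2 * Xd.shape[1] - 2 * ll   # AIC = 2k - 2 ln L
best = min(results, key=results.get)
print("AIC by regressor subset:", results, "-> selected:", best)
```

The random-intercept question from the abstract is a separate comparison (mixed versus plain model), but it can be scored with the same information-criterion machinery.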

  18. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  19. Bohr model as an algebraic collective model

    SciTech Connect

    Rowe, D. J.; Welsh, T. A.; Caprio, M. A.

    2009-05-15

    Developments and applications are presented of an algebraic version of Bohr's collective model. Illustrative examples show that fully converged calculations can be performed quickly and easily for a large range of Hamiltonians. As a result, the Bohr model becomes an effective tool in the analysis of experimental data. The examples are chosen both to confirm the reliability of the algebraic collective model and to show the diversity of results that can be obtained by its use. The focus of the paper is to facilitate identification of the limitations of the Bohr model with a view to developing more realistic, computationally tractable models.

  20. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources, while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a significant improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool.

  1. Geologic Framework Model Analysis Model Report

    SciTech Connect

    R. Clayton

    2000-12-19

The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). 
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  2. Photochemical Modeling Applications

    EPA Pesticide Factsheets

    Provides access to modeling applications involving photochemical models, including modeling of ozone, particulate matter (PM), and mercury for national and regional EPA regulations such as the Clean Air Interstate Rule (CAIR) and the Clean Air Mercury Rule

  3. Educating with Aircraft Models

    ERIC Educational Resources Information Center

    Steele, Hobie

    1976-01-01

    Described is utilization of aircraft models, model aircraft clubs, and model aircraft magazines to promote student interest in aerospace education. The addresses for clubs and magazines are included. (SL)

  4. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- A mathematical model capable of predicting OD impact risks for the ISS and other critical space assets (2) The NASA OD Evolutionary Model -- A physical model capable of predicting future debris environment based on user-specified scenarios (3) The NASA Standard Satellite Breakup Model -- A model describing the outcome of a satellite breakup (explosion or collision)

  5. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
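The book's central move, from a physical component description down to a differential or difference equation, can be shown in miniature for the simplest passive electrical circuit; the component values below are chosen purely for illustration:

```python
# An RC low-pass circuit reduced to its governing equation and simulated:
# dVc/dt = (Vin - Vc) / (R * C), discretized with a forward-Euler step.
R, C = 1e3, 1e-6           # 1 kOhm, 1 uF -> time constant R*C = 1 ms
dt, t_end = 1e-5, 5e-3     # step of tau/100, run for 5 time constants
steps = int(t_end / dt)
Vc = 0.0                   # capacitor voltage, initially discharged
Vin = 1.0                  # unit step input
trace = []
for _ in range(steps):
    Vc += dt * (Vin - Vc) / (R * C)   # difference-equation update
    trace.append(Vc)
print(f"Vc after 5 time constants: {trace[-1]:.4f} V")
```

After five time constants the simulated voltage is close to the analytic value 1 - e^-5 ≈ 0.9933 V, which is the kind of sanity check the modeling methodology relies on.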

  6. The impact of volcanic aerosol on the Northern Hemisphere stratospheric polar vortex: mechanisms and sensitivity to forcing structure

    NASA Astrophysics Data System (ADS)

    Toohey, M.; Krüger, K.; Bittner, M.; Timmreck, C.; Schmidt, H.

    2014-06-01

    Observations and simple theoretical arguments suggest that the Northern Hemisphere (NH) stratospheric polar vortex is stronger in winters following major volcanic eruptions. However, recent studies show that climate models forced by prescribed volcanic aerosol fields fail to reproduce this effect. We investigate the impact of volcanic aerosol forcing on stratospheric dynamics, including the strength of the NH polar vortex, in ensemble simulations with the Max Planck Institute Earth System Model. The model is forced by four different prescribed forcing sets representing the radiative properties of stratospheric aerosol following the 1991 eruption of Mt. Pinatubo: two forcing sets are based on observations, and are commonly used in climate model simulations, and two forcing sets are constructed based on coupled aerosol-climate model simulations. For all forcings, we find that temperature and zonal wind anomalies in the NH high latitudes are not directly impacted by anomalous volcanic aerosol heating. Instead, high latitude effects result from robust enhancements in stratospheric residual circulation, which in turn result, at least in part, from enhanced stratospheric wave activity. High latitude effects are therefore much less robust than would be expected if they were the direct result of aerosol heating. While there is significant ensemble variability in the high latitude response to each aerosol forcing set, the mean response is sensitive to the forcing set used. Significant differences, for example, are found in the NH polar stratosphere temperature and zonal wind response to two different forcing data sets constructed from different versions of SAGE II aerosol observations. Significant strengthening of the polar vortex, in rough agreement with the expected response, is achieved only using aerosol forcing extracted from prior coupled aerosol-climate model simulations. 
Differences in the dynamical response to the different forcing sets used imply that reproducing

  7. Regularized Structural Equation Modeling.

    PubMed

    Jacobucci, Ross; Grimm, Kevin J; McArdle, John J

A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
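RegSEM itself operates on latent-variable models, but the penalty mechanism it borrows can be shown in miniature with ordinary ridge regression; the data are synthetic and this is a sketch of the idea, not the authors' software:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ridge illustration of the shrinkage idea behind RegSEM: adding
# lambda * ||beta||^2 to the loss pulls coefficients toward zero,
# trading a little bias for simpler, more generalizable estimates.
n, p = 100, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:2] = [2.0, -1.0]          # sparse truth: only two real effects
y = X @ true_beta + rng.normal(scale=0.5, size=n)

def ridge(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    beta = ridge(X, y, lam)
    print(f"lambda={lam:7.1f}  ||beta|| = {np.linalg.norm(beta):.3f}")
```

As the penalty grows, the coefficient norm shrinks monotonically; RegSEM applies the same trade-off to selected parameters of a structural equation model rather than to regression slopes.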

  8. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    SciTech Connect

    Clinton Lum

    2002-02-04

The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4

  9. Model Reduction of Viscoelastic Finite Element Models

    NASA Astrophysics Data System (ADS)

    Park, C. H.; Inman, D. J.; Lam, M. J.

    1999-01-01

This paper examines a method of adding viscoelastic properties to finite element models by using additional co-ordinates to account for the frequency dependence usually associated with such damping materials. Several such methods exist, and all suffer from an increase in the order of the final finite element model, which is undesirable in many applications. Here we propose to combine one of these methods, the GHM (Golla-Hughes-McTavish) method, with model reduction techniques to remove the objection of increased model order. The result of combining the methods is the ability to add the effects of viscoelastic components to finite element or other analytical models without increasing the order of the system. The procedure is illustrated by a numerical example. The method proposed here results in a viscoelastic finite element model of a structure without increasing the order of the original model.
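The reduction step relies on discarding dynamics that contribute little to the response. A minimal sketch of model-order reduction by modal truncation (a simpler relative of the reduction techniques the paper combines with GHM; the state-space system below is invented) is:

```python
import numpy as np

# Project a stable state-space model (A, B, C) onto its slowest eigenmodes,
# discarding fast modes that decay almost immediately.
A = np.diag([-1.0, -2.0, -50.0, -80.0])   # two slow modes, two fast modes
B = np.ones((4, 1))
Cm = np.ones((1, 4))

eigvals, V = np.linalg.eig(A)
keep = np.argsort(-eigvals.real)[:2]      # retain the two slowest modes
T = V[:, keep]                            # reduction basis
Ar = np.linalg.pinv(T) @ A @ T            # reduced 2-state system
Br = np.linalg.pinv(T) @ B
Cr = Cm @ T

# Sanity check: compare steady-state (DC) gains, -C A^-1 B, full vs reduced.
dc_full = (-Cm @ np.linalg.inv(A) @ B).item()
dc_red = (-Cr @ np.linalg.inv(Ar) @ Br).item()
print(f"DC gain full: {dc_full:.4f}, reduced: {dc_red:.4f}")
```

The reduced model keeps half the states yet reproduces the DC gain to within a few percent, because the discarded fast modes contribute only 1/50 + 1/80 of it; GHM-augmented models are reduced in the same spirit so the added viscoelastic coordinates do not inflate the final model order.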

  10. Neurometric Modeling: Computational Modeling of Individual Brains

    DTIC Science & Technology

    2011-05-16

Neural networks, computational neuroscience, fMRI ...obtained using functional MRI. Algorithmic processing of these measurements can exploit a variety of statistical machine learning methods to synthesize a new kind of neuro-cognitive model, which we call neurometric models. These executable models could be

  11. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity

  12. Biosphere Model Report

    SciTech Connect

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  13. Biosphere Model Report

    SciTech Connect

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  14. Qualitative Student Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model is used as the focus of this review of qualitative student models in order to compare alternative computational models and to contrast domain requirements. The report is divided into eight sections: (1) Origins and Goals (adaptive instruction, qualitative models of processes, components of an artificial…

  15. Models for Ammunition Management

    DTIC Science & Technology

    1977-08-01

Analysis, Operations Research, Management Models, Mobilization Planning, Computer Programming, Ammunition Management, Economic Analysis, Production Planning ...ammunition managers on a unique set of nine modern computer models specifically developed to support the conventional ammunition management decision... DECISION MODELS DIRECTORATE, ROCK ISLAND, ILLINOIS 61201. ABSTRACT: This special management report presents a unique set of nine computer models

  16. Generative Models of Disfluency

    ERIC Educational Resources Information Center

    Miller, Timothy A.

    2010-01-01

    This thesis describes a generative model for representing disfluent phenomena in human speech. This model makes use of observed syntactic structure present in disfluent speech, and uses a right-corner transform on syntax trees to model this structure in a very natural way. Specifically, the phenomenon of speech repair is modeled by explicitly…

  17. Multimodeling and Model Abstraction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...

  18. Biomass Scenario Model

    SciTech Connect

    2015-09-01

    The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
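
    System-dynamics models of this kind step stocks and flows forward in time rather than optimizing. As a generic illustration only (the stock, rates, and incentive below are invented for the sketch and do not come from the BSM), a minimal stock-and-flow simulation looks like this:

```python
# Generic system-dynamics sketch (NOT the BSM itself): a single stock,
# "installed capacity", integrated forward in time with Euler steps.
# All names and rate constants here are illustrative assumptions.
def simulate(years=10, dt=0.25, incentive=0.2):
    capacity = 1.0                 # stock (arbitrary units)
    history = []
    t = 0.0
    while t < years:
        investment = (0.05 + incentive) * capacity   # inflow responds to policy incentive
        retirement = 0.03 * capacity                 # outflow: capacity retiring
        capacity += (investment - retirement) * dt   # Euler integration step
        history.append(capacity)
        t += dt
    return history

path = simulate()
```

    Because the net rate (inflow minus outflow) is positive, the stock grows over the run; changing `incentive` changes the trajectory, which is the kind of policy sensitivity a system-dynamics model is built to explore.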

  19. The Instrumental Model

    ERIC Educational Resources Information Center

    Yeates, Devin Rodney

    2011-01-01

    The goal of this dissertation is to enable better predictive models by engaging raw experimental data through the Instrumental Model. The Instrumental Model captures the protocols and procedures of experimental data analysis. The approach is formalized by encoding the Instrumental Model in an XML record. Decoupling the raw experimental data from…

  20. AIDS Epidemiological models

    NASA Astrophysics Data System (ADS)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by ISHAM [6].

  1. Impact of anthropogenic aerosols on Indian summer monsoon

    SciTech Connect

    Wang, Chien; Kim, Dongchul; Ekman, Annica; Barth, Mary; Rasch, Philip J.

    2009-11-05

    Using an interactive aerosol-climate model, we find that absorbing anthropogenic aerosols, whether coexisting with scattering aerosols or not, can significantly affect the Indian summer monsoon system. We also show that the influence is reflected in a perturbation to the moist static energy in the sub-cloud layer, initiated as a heating of the planetary boundary layer by absorbing aerosols. The perturbation appears mostly over land, extending from just north of the Arabian Sea to northern India along the southern slope of the Tibetan Plateau. As a result, during the summer monsoon season, modeled convective precipitation experiences a clear northward shift, in agreement with observed monsoon precipitation changes in recent decades, particularly during the onset season. We demonstrate that the sub-cloud layer moist static energy is a useful quantity for determining the impact of aerosols on the northward extent and, to a certain degree, the strength of monsoon convection.
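
    The moist static energy diagnosed in this study has the standard definition h = cp·T + g·z + Lv·q. A minimal Python sketch of the diagnostic (constants are textbook values; the input numbers are illustrative, not taken from the paper):

```python
# Standard moist static energy: sensible + geopotential + latent terms.
CP = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1
G = 9.81      # gravitational acceleration, m s^-2
LV = 2.5e6    # latent heat of vaporization of water, J kg^-1

def moist_static_energy(T_kelvin, z_m, q_kg_per_kg):
    """Moist static energy (J/kg) of an air parcel."""
    return CP * T_kelvin + G * z_m + LV * q_kg_per_kg

# Example: warm, moist sub-cloud air (illustrative values)
h = moist_static_energy(300.0, 500.0, 0.015)
```

    Aerosol heating of the boundary layer raises T (and hence h) there, which is why the quantity tracks where deep convection can be sustained.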

  2. Adsorptive uptake of water by semisolid secondary organic aerosols

    NASA Astrophysics Data System (ADS)

    Pajunoja, Aki; Lambe, Andrew T.; Hakala, Jani; Rastak, Narges; Cummings, Molly J.; Brogan, James F.; Hao, Liqing; Paramonov, Mikhail; Hong, Juan; Prisle, Nønne L.; Malila, Jussi; Romakkaniemi, Sami; Lehtinen, Kari E. J.; Laaksonen, Ari; Kulmala, Markku; Massoli, Paola; Onasch, Timothy B.; Donahue, Neil M.; Riipinen, Ilona; Davidovits, Paul; Worsnop, Douglas R.; Petäjä, Tuukka; Virtanen, Annele

    2015-04-01

    Aerosol climate effects are intimately tied to interactions with water. Here we combine hygroscopicity measurements with direct observations of the phase of secondary organic aerosol (SOA) particles to show that water uptake by slightly oxygenated SOA is an adsorption-dominated process under subsaturated conditions, where low solubility inhibits water uptake until the humidity is high enough for dissolution to occur. This reconciles reported discrepancies in previous hygroscopicity closure studies. We demonstrate that the difference in SOA hygroscopic behavior in subsaturated and supersaturated conditions can lead to an effect of up to about 30% in the direct aerosol forcing, highlighting the need to implement correct descriptions of these processes in atmospheric models. Obtaining closure across the water saturation point is therefore a critical issue for accurate climate modeling.

  3. Talk about toy models

    NASA Astrophysics Data System (ADS)

    Luczak, Joshua

    2017-02-01

    Scientific models are frequently discussed in philosophy of science. A great deal of the discussion is centred on approximation, idealisation, and on how these models achieve their representational function. Despite the importance, distinct nature, and high presence of toy models, they have received little attention from philosophers. This paper hopes to remedy this situation. It aims to elevate the status of toy models: by distinguishing them from approximations and idealisations, by highlighting and elaborating on several ways the Kac ring, a simple statistical mechanical model, is used as a toy model, and by explaining why toy models can be used to successfully carry out important work without performing a representational function.
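
    The Kac ring discussed above has simple, fully specified dynamics: balls of two colors sit on the sites of a ring, a fixed subset of edges carries markers, and at each step every ball moves one site clockwise, flipping color when it crosses a marked edge. A minimal simulation sketch (ring size and marker density are illustrative choices):

```python
import random

def kac_ring_step(colors, markers):
    """One step of the Kac ring: every ball moves one site clockwise and
    flips color (+1 <-> -1) whenever it crosses a marked edge."""
    n = len(colors)
    new = [0] * n
    for i in range(n):
        ball = -colors[i] if markers[i] else colors[i]  # edge i -> i+1
        new[(i + 1) % n] = ball
    return new

random.seed(0)
n = 200
markers = [random.random() < 0.1 for _ in range(n)]  # ~10% of edges marked
colors = [1] * n                                     # all balls start the same color
for _ in range(50):
    colors = kac_ring_step(colors, markers)
magnetization = sum(colors) / n
```

    The model's pedagogical punch line: the "magnetization" relaxes toward zero as a coarse-grained argument predicts, yet the dynamics are exactly reversible and the state recurs perfectly after 2n steps, since every ball then crosses every marker exactly twice.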

  4. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-02-16

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  5. Knowledge and information modeling.

    PubMed

    Madsen, Maria

    2010-01-01

    This chapter gives an educational overview of: * commonly used modelling methods and what they represent * the importance of selecting the tools and methods suited to the health information system being designed * how the quality of the information or knowledge model is determined by the quality of the system requirements specification * differentiating between the purpose of information models and knowledge models * the benefits of the openEHR approach for health care data modeling.

  6. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.

  7. Stable models of superacceleration

    SciTech Connect

    Kaplinghat, Manoj; Rajaraman, Arvind

    2007-05-15

    We discuss an instability in a large class of models where dark energy is coupled to matter. In these models the mass of the scalar field is much larger than the expansion rate of the Universe. We find models in which this instability is absent, and show that these models generically predict an apparent equation of state for dark energy smaller than -1, i.e., superacceleration. These models have no acausal behavior or ghosts.

  8. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  9. Model Shrinkage for Discriminative Language Models

    NASA Astrophysics Data System (ADS)

    Oba, Takanobu; Hori, Takaaki; Nakamura, Atsushi; Ito, Akinori

    This paper describes a technique for overcoming the model shrinkage problem in automatic speech recognition (ASR), which allows application developers and users to control the model size with less degradation of accuracy. Recently, models for ASR systems tend to be large, and this can constitute a bottleneck for developers and users without special knowledge of ASR with respect to introducing the ASR function. Specifically, discriminative language models (DLMs) are usually designed in a high-dimensional parameter space, although DLMs have gained increasing attention as an approach for improving recognition accuracy. Our proposed method can be applied to linear models, including DLMs, in which the score of an input sample is given by the inner product of its features and the model parameters. It shrinks models with an easy computation by obtaining simple statistics, namely the square sums of the feature values appearing in a data set. Our experimental results show that the proposed method can shrink a DLM with little degradation in accuracy and performs properly whether or not the data for obtaining the statistics are the same as the data for training the model.
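
    One plausible reading of the statistic-based idea can be sketched as follows (a hypothetical illustration, not the authors' algorithm: the function name, data layout, and keep-fraction rule are all assumptions). Features whose square-sum statistic over a data set is small contribute little to the inner-product score, so their weights are pruned:

```python
# Hypothetical sketch: prune linear-model weights whose features have small
# square-sum statistics over a data set. Sparse samples are dicts of
# feature -> value; the model is a dict of feature -> weight.
def shrink_linear_model(weights, samples, keep_fraction=0.5):
    stats = {f: 0.0 for f in weights}
    for x in samples:
        for f, v in x.items():
            if f in stats:
                stats[f] += v * v            # square sum of feature values
    ranked = sorted(weights, key=lambda f: stats[f], reverse=True)
    kept = set(ranked[: max(1, int(keep_fraction * len(ranked)))])
    return {f: w for f, w in weights.items() if f in kept}

w = {"a": 0.5, "b": -1.2, "c": 0.1, "d": 0.7}
data = [{"a": 3.0, "b": 0.1}, {"a": 2.0, "c": 0.2}, {"d": 0.05}]
small = shrink_linear_model(w, data, keep_fraction=0.5)
```

    The appeal of such a criterion is exactly what the abstract emphasizes: it needs only a single pass over a data set, not retraining.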

  10. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  11. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  12. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  14. Multiple model inference.

    SciTech Connect

    Swiler, Laura Painton; Urbina, Angel

    2010-07-01

    This paper compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods although one can select the model that yields the smallest sum-of-squared error function. Information theoretic approaches balance overfitting with model accuracy by incorporating terms that penalize more parameters with a log-likelihood term to reflect goodness of fit. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model and this is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared and the influence of the log-likelihood term in all approaches is discussed.
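
    The information-theoretic and Bayesian ingredients can be illustrated with a toy computation (all numbers are invented, and the marginal likelihoods in the posterior calculation are approximated by maximized likelihoods purely for illustration; a full Bayesian treatment would integrate over each model's parameters):

```python
import math

# Toy model selection: AIC penalizes parameter count against fit quality;
# Bayesian selection weights likelihood by prior and normalizes.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

def posterior_model_probs(log_likelihoods, priors):
    """P(model | data) proportional to likelihood x prior (maximized
    likelihoods stand in for marginal likelihoods here)."""
    weights = [math.exp(ll) * p for ll, p in zip(log_likelihoods, priors)]
    total = sum(weights)
    return [w / total for w in weights]

lls = [-12.0, -10.5, -10.2]   # maximized log-likelihoods of three candidate models
ks = [2, 4, 7]                # their parameter counts
aics = [aic(ll, k) for ll, k in zip(lls, ks)]
probs = posterior_model_probs(lls, [1 / 3, 1 / 3, 1 / 3])
best_by_aic = min(range(3), key=lambda i: aics[i])
```

    Note how the two criteria can disagree: the 7-parameter model fits best and so gets the highest (unpenalized) posterior weight here, while AIC's parameter penalty favors the simplest model.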

  15. Modeling nonstationary longitudinal data.

    PubMed

    Núñez-Antón, V; Zimmerman, D L

    2000-09-01

    An important theme of longitudinal data analysis in the past two decades has been the development and use of explicit parametric models for the data's variance-covariance structure. A variety of these models have been proposed, of which most are second-order stationary. A few are flexible enough to accommodate nonstationarity, i.e., nonconstant variances and/or correlations that are not a function solely of elapsed time between measurements. We review five nonstationary models that we regard as most useful: (1) the unstructured covariance model, (2) unstructured antedependence models, (3) structured antedependence models, (4) autoregressive integrated moving average and similar models, and (5) random coefficients models. We evaluate the relative strengths and limitations of each model, emphasizing when it is inappropriate or unlikely to be useful. We present three examples to illustrate the fitting and comparison of the models and to demonstrate that nonstationary longitudinal data can be modeled effectively and, in some cases, quite parsimoniously. In these examples, the antedependence models generally prove to be superior and the random coefficients models prove to be inferior. We conclude that antedependence models should be given much greater consideration than they have historically received.

  16. Modeling volatility using state space models.

    PubMed

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
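
    The state-space structure described, a hidden (log-)volatility process observed through noise, can be sketched with a standard scalar Kalman filter. This is a generic illustration of separating dynamic from observational noise, not the authors' estimation code; the AR coefficient and noise variances below are assumed values:

```python
# Scalar state-space model: h_t = phi*h_{t-1} + w_t (dynamic noise, var q),
# y_t = h_t + v_t (observational noise, var r), filtered by Kalman recursion.
def kalman_filter(y, phi, q, r, h0=0.0, p0=1.0):
    """Return filtered estimates of the hidden state given noisy observations."""
    h, p = h0, p0
    out = []
    for obs in y:
        # predict step
        h_pred = phi * h
        p_pred = phi * phi * p + q
        # update step
        k = p_pred / (p_pred + r)        # Kalman gain: trust in the observation
        h = h_pred + k * (obs - h_pred)
        p = (1 - k) * p_pred
        out.append(h)
    return out

est = kalman_filter([0.5, 0.4, 0.8, 0.3], phi=0.95, q=0.01, r=0.5)
```

    With r much larger than q, the gain is small and the filter smooths heavily; this is exactly the regime in which ignoring observational noise (as a plain autoregression does) badly underestimates the persistence of the hidden process.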

  17. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
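
    The aggregation step can be illustrated generically (a hypothetical sketch, not the patented method: the functions and the example architecture are assumptions): low-level component reliability models are combined according to an architecture description, here simple series/parallel composition of independent components.

```python
# Hypothetical aggregation of low-level reliability models per an
# architecture description, assuming independent components.
def series(reliabilities):
    """All components must work for the system to work."""
    p = 1.0
    for r in reliabilities:
        p *= r
    return p

def parallel(reliabilities):
    """At least one redundant component must work."""
    q = 1.0
    for r in reliabilities:
        q *= 1.0 - r
    return 1.0 - q

# Example architecture: two redundant sensors feeding one processor.
system_r = series([parallel([0.9, 0.9]), 0.99])
```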

  18. Energy-consumption modelling

    SciTech Connect

    Reiter, E.R.

    1980-01-01

    A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.

  19. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  20. A future of the model organism model

    PubMed Central

    Rine, Jasper

    2014-01-01

    Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. However, researchers must take special care and implement new resources to enable the newest members of the community to engage fully with the remarkable legacy of information in these fields. PMID:24577733

  1. Biosphere Model Report

    SciTech Connect

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  2. Develop a Model Component

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before it is implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library component; implement library components reference; implement subsystem components; develop a test script; run the test script; develop users guide; send model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose. The component I was assigned, specifically, was a

  3. Aerosol Modeling for the Global Model Initiative

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.

    2001-01-01

    The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

  4. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameters range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
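
    The assembly idea, local first-order Taylor models blended with radial basis function weights, can be sketched in one dimension (a simplified illustration under assumed details: Gaussian RBFs, a toy scalar target, and an arbitrary width; the paper's aerodynamic application involves far richer state spaces):

```python
import math

# Blend local linear models f_i(x) = f(x_i) + f'(x_i)*(x - x_i) with
# normalized Gaussian radial-basis-function weights centered at x_i.
def assemble_piecewise_linear(x, centers, values, slopes, width=0.5):
    ws = [math.exp(-((x - c) / width) ** 2) for c in centers]
    local_preds = [v + s * (x - c) for c, v, s in zip(centers, values, slopes)]
    return sum(w * f for w, f in zip(ws, local_preds)) / sum(ws)

# Toy target f(x) = x^2, sampled (value and derivative) at three states.
centers = [0.0, 1.0, 2.0]
values = [c * c for c in centers]
slopes = [2 * c for c in centers]
approx = assemble_piecewise_linear(1.5, centers, values, slopes)
```

    Near a sampling state the nearest tangent line dominates, so the blend reproduces the nonlinear target well there; between states it smoothly interpolates the local linearizations rather than committing to a single global linear model.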

  5. Aggregation in ecosystem models and model stability

    NASA Astrophysics Data System (ADS)

    Giricheva, Evgeniya

    2015-05-01

    Using a multimodal approach to research ecosystems improves the use of available information about an object. This study presents several models of the Bering Sea ecosystem. The ecosystem is first considered as a closed object, that is, the influence of the environment is not included; we then add links with the external medium in the models. The models differ in the degree and method of grouping components. Our method is based on the differences in habitat and food source of groups, which allows us to determine the grouping of species with a greater effect on system dynamics. In particular, we determine whether benthic fish aggregation or pelagic fish aggregation can change the consumption structure of some groups of species and, consequently, the behavior of the entire model system.

  6. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons, making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs that have previously been waterflooded to residual oil saturation; thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  7. Of Molecules and Models.

    ERIC Educational Resources Information Center

    Brinner, Bonnie

    1992-01-01

    Presents an activity in which models help students visualize both the DNA process and transcription. After constructing DNA, RNA messenger, and RNA transfer molecules; students model cells, protein synthesis, codons, and RNA movement. (MDH)

  8. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
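    The "clamping" idea described in this abstract can be illustrated with a few lines of code. This is a hedged sketch, not the authors' implementation; the function name and the toy predictor values are invented for illustration. Each environmental predictor in new data is bounded to the range observed in the training data before the fitted model is queried:

```python
import numpy as np

def bound_predictors(x_new, x_train):
    """Clamp each environmental predictor in new data to the range observed
    in the training data, so the fitted model is never extrapolated outside
    the environmental bounds it was built on."""
    lo = x_train.min(axis=0)
    hi = x_train.max(axis=0)
    return np.clip(x_new, lo, hi)

# Hypothetical training data: temperature spans 5-30, rainfall spans 100-900
x_train = np.array([[5.0, 100.0], [30.0, 900.0], [18.0, 400.0]])
x_new = np.array([[35.0, 50.0]])   # outside the training envelope
bounded = bound_predictors(x_new, x_train)
# bounded == [[30.0, 100.0]]
```

    More conservative variants tighten the bounds further (for example to inner percentiles of the training data), which matches the paper's suggestion that the most conservative bounding may be preferable to loose or unbounded extrapolation.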

  9. Mass modeling for bars

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.

    1987-01-01

    Methods of modeling mass for bars are surveyed. A method for extending John Archer's concept of consistent mass beyond just translational inertia effects is included. Recommendations are given for various types of modeling situations.

  10. Green Infrastructure Modeling Toolkit

    EPA Pesticide Factsheets

    EPA's Green Infrastructure Modeling Toolkit is a toolkit of 5 EPA green infrastructure models and tools, along with communication materials, that can be used as a teaching tool and a quick reference resource when making GI implementation decisions.

  11. X-33 RCS model

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Model support system and instrumentation cabling of the 1% scale X-33 reaction control system model, installed in the Unitary Plan Wind Tunnel for supersonic testing in building 1251, test section #2.

  12. SEDIMENT GEOCHEMICAL MODEL

    EPA Science Inventory

    Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...

  13. Modeling EERE deployment programs

    SciTech Connect

    Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.

    2007-11-01

    The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.

  14. The Model Builders

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This video explores the world of modeling at the NASA Johnson Space Center. Artisans create models, large and small, to help scientists and engineers make final design modifications before building more costly prototypes.

  15. Model comparison in ANOVA.

    PubMed

    Rouder, Jeffrey N; Engelhardt, Christopher R; McCabe, Simon; Morey, Richard D

    2016-12-01

    Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet testing, including traditional ANOVA, has recently been critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-a-vis a more general alternative by penalizing more flexible models. We argue that this ability to support simpler models allows for more nuanced theoretical conclusions than provided by traditional ANOVA F-tests. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.

  16. Protein solubility modeling

    NASA Technical Reports Server (NTRS)

    Agena, S. M.; Pusey, M. L.; Bogle, I. D.

    1999-01-01

    A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

  17. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.

  18. Ginocchio model with isospin

    NASA Astrophysics Data System (ADS)

    Okai, Tadashi; Otsuka, Takaharu; Arima, Akito

    1992-02-01

    We study the sp(8) subgroup of the isospin-invariant Ginocchio model. The allowed quantum numbers are determined in terms of Young diagrams. Using this result, we discuss the excitation energy of a model Hamiltonian.

  19. Modeling DNA Replication.

    ERIC Educational Resources Information Center

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  20. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. ?? 2011 Current Zoology.

  1. Modelling pelagic biogeography

    NASA Astrophysics Data System (ADS)

    Olson, Donald B.; Hood, Raleigh R.

    Various combinations of physical and biological models are used to explore factors that determine the distribution of organisms in the world's oceans. The physical models examined include simple box models with parameterized inter-box exchanges that take into account variable box geometries, and specified continuous flows either in the Eulerian frame as stream-functions or as Lagrangian trajectories. A 1-dimensional mixed-layer model and a primitive equation channel model are introduced as examples of dynamical models depicting ocean physics. Biological models are discussed starting with a simple nitrogen (N), phytoplankton (P), zooplankton (Z) and detritus (D), NPZD, formulation. The equilibria of this model are explored analytically as an example of computing steady-state solutions, and then consideration is given to where in parameter space extinction occurs. Nonlinearities and the expansion of NPZD to multi-species models are also treated. This is followed by the introduction of a nonlinear three-component food chain model, multi-species Lotka-Volterra competition models, and finally a discussion of structured population models, including a derivation of a genetics model written in terms of genotypes. The physical models are then coupled with the biological ones in a series of examples. Both the box model with Lotka-Volterra multi-species population dynamics and the 1-dimensional mixed-layer model with NPZD are used to demonstrate how the existence of spatial and temporal niches can allow a large number of species to coexist within biogeographic domains even though conditions at most sites and times are not conducive to supporting such diversity. These models recreate the basic diversity patterns observed in the pelagic ecosystem at various latitudes. The box model simulations also demonstrate the tendency for diffusive models to overestimate the dispersion of a species. In order to explore the dynamics of the edges of biogeographic domains a three species food chain model is
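    A minimal NPZD formulation of the kind discussed in this abstract can be written as four coupled ODEs. The sketch below is illustrative only; the parameter values and function names are assumptions, not taken from the paper. It uses a closed nutrient budget, so the total N + P + Z + D is conserved:

```python
def npzd_step(N, P, Z, D, dt=0.01, mu=1.0, kN=0.5, g=0.5, kP=0.3,
              a=0.7, mP=0.05, mZ=0.05, r=0.1):
    """One Euler step of a minimal closed NPZD model."""
    uptake = mu * N / (kN + N) * P      # phytoplankton growth on nutrient
    grazing = g * P / (kP + P) * Z      # zooplankton grazing on phytoplankton
    dN = r * D - uptake                 # remineralization returns N from detritus
    dP = uptake - grazing - mP * P
    dZ = a * grazing - mZ * Z           # fraction a of grazing is assimilated
    dD = (1 - a) * grazing + mP * P + mZ * Z - r * D
    return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

state = (1.0, 0.2, 0.1, 0.05)
total0 = sum(state)
for _ in range(1000):
    state = npzd_step(*state)
```

    Because every loss term in one compartment appears as a gain in another, the four tendencies sum to zero; checking that the total stays constant is a quick sanity test of such a closed formulation.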

  2. Hierarchical Bass model

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru

    2014-03-01

    We propose a new model of the diffusion of a product that includes a memory of how many adopters or advertisements a non-adopter has encountered, where (non-)adopters are people (not) possessing the product. This effect is absent from the Bass model. As an application, we use the model to fit iPod sales data and obtain better agreement than with the Bass model.
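    For reference, the classic Bass model that this paper extends can be simulated in a few lines. This is a sketch under assumed, typical-order-of-magnitude parameter values; p is the innovation coefficient, q the imitation coefficient, and M the market size:

```python
def bass_adopters(p, q, M, T, dt=0.1):
    """Euler simulation of cumulative adopters A(t) under the Bass model:
    dA/dt = (p + q*A/M) * (M - A)."""
    A, path = 0.0, [0.0]
    for _ in range(int(T / dt)):
        A += dt * (p + q * A / M) * (M - A)
        path.append(A)
    return path

# Hypothetical parameters, normalized market size M = 1
path = bass_adopters(p=0.03, q=0.38, M=1.0, T=20.0)
```

    The memory effect proposed in the paper modifies how non-adopters respond to past contacts, which the memoryless Bass equation above cannot represent.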

  3. Methodology for Stochastic Modeling.

    DTIC Science & Technology

    1985-01-01

    OCR fragment of the DTIC report documentation page: AD-A155 851, "Methodology for Stochastic Modeling," Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, MD; H. E. Cohen, January 1985. Indexed terms: autoregression models, moving average models, ARMA, adaptive modeling, covariance methods, singular value decomposition, order determination, rational ...

  4. Reinforced Concrete Modeling

    DTIC Science & Technology

    1982-07-01

    OCR fragment of the DTIC report documentation page: AFWL-TR-82-9, "Reinforced Concrete Modeling," H. L. Schreyer and J. W. Jeter, Jr., New Mexico Engineering Research Institute, University of New Mexico; Final Report, NMERI TA8-9. Abstract fragment: "... loading were identified and used to evaluate current concrete models. Since the endochronic and viscoplastic models provide satisfactory descriptions ..."

  5. Atmospheric density models

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.

  6. Soil moisture modeling review

    NASA Technical Reports Server (NTRS)

    Hildreth, W. W.

    1978-01-01

    A determination of the state of the art in soil moisture transport modeling based on physical or physiological principles was made. It was found that soil moisture models based on physical principles have been under development for more than 10 years. These models were shown to represent infiltration and redistribution of soil moisture quite well. Evapotranspiration has not been as adequately incorporated into the models.

  7. Model Engineering using Multimodeling

    DTIC Science & Technology

    2008-04-16

    Abstract fragments (OCR): ... given as a Statecharts model, and interprets it as a hierarchical multimodel. We then show an equivalent model constructed with Ptolemy II [13] ... That work followed on Ptolemy Classic [9], which provided a software architecture supporting a general form of hierarchical multimodeling ... Colif [10]. This approach does not segregate distinct models of computation hierarchically. Ptolemy Classic [9] also illustrated multi-view modeling.

  8. Mathematical circulatory system model

    NASA Technical Reports Server (NTRS)

    Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)

    2010-01-01

    A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
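    A logistic representation of a regulatory parameter, as described in this patent abstract, can be sketched as follows. The function and parameter names here are hypothetical (the abstract does not specify them): the parameter varies smoothly between physiological bounds as its controlling variable changes.

```python
import math

def logistic_regulation(x, p_min, p_max, x0, k):
    """Hypothetical regulatory-mechanism parameter varying smoothly between
    bounds p_min and p_max as a logistic function of a controlling variable x,
    centered at x0 with steepness k."""
    return p_min + (p_max - p_min) / (1.0 + math.exp(-k * (x - x0)))

# Illustrative values: parameter saturates at its bounds for extreme inputs
val_low = logistic_regulation(-100.0, 0.0, 1.0, 0.0, 0.1)
val_mid = logistic_regulation(0.0, 0.0, 1.0, 0.0, 0.1)
val_high = logistic_regulation(100.0, 0.0, 1.0, 0.0, 0.1)
```

    The appeal of the logistic form in a lumped-parameter model is that the regulatory response stays bounded and differentiable, which keeps the coupled circulatory equations well behaved.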

  9. Future of groundwater modeling

    USGS Publications Warehouse

    Langevin, Christian D.; Panday, Sorab

    2012-01-01

    With an increasing need to better manage water resources, the future of groundwater modeling is bright and exciting. However, while the past can be described and the present is known, the future of groundwater modeling, just like a groundwater model result, is highly uncertain and any prediction is probably not going to be entirely representative. Thus we acknowledge this as we present our vision of where groundwater modeling may be headed.

  10. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  11. Rock Properties Model

    SciTech Connect

    C. Lum

    2004-09-16

    The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density as direct input to the ''Saturated Zone Flow and Transport Model Abstraction'', MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Sections 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the ''Technical Work Plan for: The Integrated Site Model, Revision 05'' (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.

  12. General Graded Response Model.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    This paper describes the graded response model. The graded response model represents a family of mathematical models that deal with ordered polytomous categories, such as: (1) letter grading; (2) an attitude survey with "strongly disagree, disagree, agree, and strongly agree" choices; (3) partial credit given in accord with an…

  13. Models, Norms and Sharing.

    ERIC Educational Resources Information Center

    Harris, Mary B.

    To investigate the effect of modeling on altruism, 156 third and fifth grade children were exposed to a model who either shared with them, gave to a charity, or refused to share. The test apparatus, identified as a game, consisted of a box with signal lights and a chute through which marbles were dispensed. Subjects and the model played the game…

  14. Designing cyclic universe models.

    PubMed

    Khoury, Justin; Steinhardt, Paul J; Turok, Neil

    2004-01-23

    The phenomenological constraints on the scalar field potential in cyclic models of the Universe are presented. We show that cyclic models require a comparable degree of tuning to that needed for inflationary models. The constraints are reduced to a set of simple design rules including "fast-roll" parameters analogous to the "slow-roll" parameters in inflation.

  15. Modelling Vocabulary Loss

    ERIC Educational Resources Information Center

    Meara, Paul

    2004-01-01

    This paper describes some simple simulation models of vocabulary attrition. The attrition process is modelled using a random autonomous Boolean network model, and some parallels with real attrition data are drawn. The paper argues that applying a complex systems approach to attrition can provide some important insights, which suggest that real…

  16. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  17. A Model Performance

    ERIC Educational Resources Information Center

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  18. Modeling EERE Deployment Programs

    SciTech Connect

    Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.

    2007-11-01

    This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.

  19. Modeling Applications and Tools

    EPA Pesticide Factsheets

    The U.S. EPA's Air Quality Modeling Group (AQMG) conducts modeling analyses to support policy and regulatory decisions in OAR and provides leadership and direction on the full range of air quality models and other mathematical simulation techniques used in

  20. Model Rockets and Microchips.

    ERIC Educational Resources Information Center

    Fitzsimmons, Charles P.

    1986-01-01

    Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)

  1. IR DIAL performance modeling

    SciTech Connect

    Sharlemann, E.T.

    1994-07-01

    We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

  2. Modeling and Remodeling Writing

    ERIC Educational Resources Information Center

    Hayes, John R.

    2012-01-01

    In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…

  3. Campus Energy Modeling Platform

    SciTech Connect

    Sides, Scott; Kemper, Travis; Larsen, Ross; Graf, Peter

    2014-09-19

    NREL's Campus Energy Modeling project provides a suite of simulation tools for integrated, data driven energy modeling of commercial buildings and campuses using Simulink. The tools enable development of fully interconnected models for commercial campus energy infrastructure, including electrical distribution systems, district heating and cooling, onsite generation (both conventional and renewable), building loads, energy storage, and control systems.

  4. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  5. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  6. The Accreditation Plus Model.

    ERIC Educational Resources Information Center

    Ayers, Jerry B.; And Others

    1988-01-01

    The Accreditation Plus model developed by the Center for Teacher Education Evaluation of Tennessee Technological University (Cookeville) for evaluation of teacher education programs is described. An amalgamation of the accreditation model and use-tailored evaluation plans, the model calls for informed eclecticism in the assembly of evaluation…

  7. Improved analytic nutation model

    NASA Technical Reports Server (NTRS)

    Yoder, C. F.; Ivins, E. R.

    1988-01-01

    Models describing the earth's nutations are discussed. It is found that the simple model of Sasao et al., (1981) differs from Wahr's (1981) theory term by term by less than 0.3 marcsec if a modern earth structure model is used to evaluate the nutation structure constants. In addition, the effect of oceans is estimated.

  8. Modeling Climate Dynamically

    ERIC Educational Resources Information Center

    Walsh, Jim; McGehee, Richard

    2013-01-01

    A dynamical systems approach to energy balance models of climate is presented, focusing on low order, or conceptual, models. Included are global average and latitude-dependent, surface temperature models. The development and analysis of the differential equations and corresponding bifurcation diagrams provides a host of appropriate material for…

  9. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  10. Modern Media Education Models

    ERIC Educational Resources Information Center

    Fedorov, Alexander

    2011-01-01

    The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…

  11. Models and Indicators.

    ERIC Educational Resources Information Center

    Land, Kenneth C.

    2001-01-01

    Examines the definition, construction, and interpretation of social indicators. Shows how standard classes of formalisms used to construct models in contemporary sociology are derived from the general theory of models. Reviews recent model building and evaluation related to active life expectancy among the elderly, fertility rates, and indicators…

  12. Generalized Latent Trait Models.

    ERIC Educational Resources Information Center

    Moustaki, Irini; Knott, Martin

    2000-01-01

    Discusses a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. Presents a unified maximum likelihood method for estimating the parameters of the generalized latent trait model and discusses the scoring of individuals on the latent dimensions.…

  13. Molecular Models in Biology

    ERIC Educational Resources Information Center

    Goodman, Richard E.

    1970-01-01

    Describes types of molecular models (ball-and-stick, framework, and space-filling) and evaluates commercially available kits. Gives instructions for constructive models from polystyrene balls and pipe-cleaners. Models are useful for class demonstrations although not sufficiently accurate for research use. Illustrations show biologically important…

  14. A Model Chemistry Class.

    ERIC Educational Resources Information Center

    Summerlin, Lee; Borgford, Christie

    1989-01-01

    Described is an activity which uses a 96-well reaction plate and soda straws to construct a model of the periodic table of the elements. The model illustrates the ionization energies of the various elements. Construction of the model and related concepts are discussed. (CW)

  15. Models for Products

    ERIC Educational Resources Information Center

    Speiser, Bob; Walter, Chuck

    2011-01-01

    This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…

  16. Concluding remarks: challenges for aerosols and climate.

    PubMed

    Murphy, D M

    2013-01-01

    We study aerosols for many reasons, including their effects on human health and climate. For climate, it is important to distinguish between the overall radiative effect of aerosols and the radiative forcing, which is the anthropogenic change (after rapid atmospheric adjustments) since pre-industrial times. The radiative forcing is in principle much harder to observe than the overall effect because one must understand which particles are natural in today's atmosphere and what aerosols were like in the atmosphere before large-scale human influence. Because we cannot go back and measure the past, calculating the radiative forcing may often require modeling detailed aerosol processes. This is a motivation for many of the processes studied at Faraday Discussion 165. Other processes may need more attention by the aerosol climate community.

  17. The Spatial and Temporal Heterogeneity of Precipitation and Aerosol-Cloud Radiative Forcing Uncertainty in Climatically Important Regions

    NASA Astrophysics Data System (ADS)

    Regayre, L.; Pringle, K.; Lee, L.; Booth, B.; Browse, J.; Mann, G.; Woodhouse, M. T.; Reddington, C.; Carslaw, K. S.; Rap, A.

    2014-12-01

    Aerosol-cloud radiative forcing and precipitation sensitivities are quantified within climatically important regions, where surface temperatures and moisture availability are thought to influence large-scale climatic effects. The sensitivity of precipitation and the balance of incoming and outgoing radiation to uncertain historical aerosol emission fluxes and aerosol-cloud parametrisations are quantified and their climatic importance considered. The predictability of monsoon onset and intensity, position of the inter-tropical convergence zone, tropical storm frequency and intensity, heat transport to the Arctic and changes in the mode of the El Niño Southern Oscillation are all limited by the parametric uncertainties examined here. Precipitation and aerosol-cloud radiative forcing sensitivities are found to be both spatially and temporally heterogeneous. Statistical analysis highlights aspects of aerosol-climate research and model development that should be prioritised in order to reduce the impact of uncertainty in regional precipitation and aerosol-cloud forcing on near-term climate projections.

  18. Advances in Watershed Models and Modeling

    NASA Astrophysics Data System (ADS)

    Yeh, G. T.; Zhang, F.

    2015-12-01

The development of watershed models and their application to real-world problems has evolved significantly since the 1960s. Watershed models can be classified by the media included, the processes dealt with, and the approaches taken. In terms of media, a watershed may include a segregated overland regime, river-canal-open channel networks, ponds-reservoirs-small lakes, and subsurface media. It may also include an integrated set of all or some of these media, as well as man-made control structures. In terms of processes, a watershed model may deal with coupled or decoupled hydrological and biogeochemical cycles. These processes include fluid flow, thermal transport, salinity transport, sediment transport, reactive transport, and biota and microbe kinetics. In terms of approaches, either a parametric or a physics-based approach can be taken. This talk discusses the evolution of watershed models over the past sixty years. The advances in watershed models center on their increasing capability to represent these segregated or integrated media and coupled or decoupled processes. Widely used models developed by academia, research institutes, government agencies, and private industry will be reviewed in terms of the media and processes included as well as the approaches taken. Many types of potential benchmark problems can be proposed and will be discussed. This presentation will focus on three benchmark problems involving biogeochemical cycles. These three problems, dealing with water-quality transport, will be formulated in terms of reactive transport. Simulation results will be illustrated using WASH123D, a watershed model developed and continuously updated by the author and his PhD graduates. Keywords: Hydrological Cycles, Biogeochemical Cycles, Biota Kinetics, Parametric Approach, Physics-based Approach, Reactive Transport.

  19. Transgenesis for pig models

    PubMed Central

    Yum, Soo-Young; Yoon, Ki-Young; Lee, Choong-Il; Lee, Byeong-Chun

    2016-01-01

Animal models, particularly pigs, have come to play an important role in translational biomedical research. Many pig models have been genetically modified via somatic cell nuclear transfer (SCNT). However, because most transgenic pigs to date have been produced by random integration, the need has been raised for more precise gene-mutated models that use recombinase-based conditional gene expression, as in mice. Currently, advanced genome-editing technologies enable us to generate specific gene-deleted and -inserted pig models. In the future, pig models developed with gene-editing technologies could be a valuable resource for biomedical research. PMID:27030199

  20. UZ Colloid Transport Model

    SciTech Connect

    M. McGraw

    2000-04-13

The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); (3) provide a basis for development of an abstracted model for use in PA calculations.

  1. Mathematical modeling in neuroendocrinology.

    PubMed

    Bertram, Richard

    2015-04-01

    Mathematical models are commonly used in neuroscience, both as tools for integrating data and as devices for designing new experiments that test model predictions. The wide range of relevant spatial and temporal scales in the neuroendocrine system makes neuroendocrinology a branch of neuroscience with great potential for modeling. This article provides an overview of concepts that are useful for understanding mathematical models of the neuroendocrine system, as well as design principles that have been illuminated through the use of mathematical models. These principles are found over and over again in cellular dynamics, and serve as building blocks for understanding some of the complex temporal dynamics that are exhibited throughout the neuroendocrine system.

  2. Models of Goldstone gauginos

    NASA Astrophysics Data System (ADS)

    Alves, Daniele S. M.; Galloway, Jamison; McCullough, Matthew; Weiner, Neal

    2016-04-01

    Models with Dirac gauginos are appealing scenarios for physics beyond the Standard Model. They have smaller radiative corrections to scalar soft masses, a suppression of certain supersymmetry (SUSY) production processes at the LHC, and ameliorated flavor constraints. Unfortunately, they are generically plagued by tachyons charged under the Standard Model, and attempts to eliminate such states typically spoil the positive features. The recently proposed "Goldstone gaugino" mechanism provides a simple realization of Dirac gauginos that is automatically free of dangerous tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply and discuss scenarios for unification.

  3. Mechanics of materials model

    NASA Technical Reports Server (NTRS)

    Meister, Jeffrey P.

    1987-01-01

    The Mechanics of Materials Model (MOMM) is a three-dimensional inelastic structural analysis code for use as an early design stage tool for hot section components. MOMM is a stiffness method finite element code that uses a network of beams to characterize component behavior. The MOMM contains three material models to account for inelastic material behavior. These include the simplified material model, which assumes a bilinear stress-strain response; the state-of-the-art model, which utilizes the classical elastic-plastic-creep strain decomposition; and Walker's viscoplastic model, which accounts for the interaction between creep and plasticity that occurs under cyclic loading conditions.

  4. Laser Range Camera Modeling

    SciTech Connect

    Storjohann, K.

    1990-01-01

    This paper describes an imaging model that was derived for use with a laser range camera (LRC) developed by the Advanced Intelligent Machines Division of Odetics. However, this model could be applied to any comparable imaging system. Both the derivation of the model and the determination of the LRC's intrinsic parameters are explained. For the purpose of evaluating the LRC's extrinsic parameters, i.e., its external orientation, a transformation of the LRC's imaging model into a standard camera's (SC) pinhole model is derived. By virtue of this transformation, the evaluation of the LRC's external orientation can be found by applying any SC calibration technique.

  5. The FREZCHEM Model

    NASA Astrophysics Data System (ADS)

    Marion, Giles M.; Kargel, Jeffrey S.

    Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.

  6. Models of the universe

    NASA Astrophysics Data System (ADS)

    Fischer, Arthur E.

    1996-01-01

In this paper a theory of models of the universe is proposed. We refer to such models as cosmological models, where a cosmological model is defined as an Einstein-inextendible Einstein spacetime. A cosmological model is absolute if it is a Lorentz-inextendible Einstein spacetime, predictive if it is globally hyperbolic, and non-predictive if it is non-globally hyperbolic. We discuss several features of these models in the study of cosmology. As an example, any compact Einstein spacetime is always a non-predictive absolute cosmological model, whereas a noncompact complete Einstein spacetime is an absolute cosmological model which may be either predictive or non-predictive. We discuss the important role played by maximal Einstein spacetimes. In particular, we examine the possible proper Lorentz-extensions of such spacetimes, and show that a spatially compact maximal Einstein spacetime is exclusively either a predictive cosmological model or a proper sub-spacetime of a non-predictive cosmological model. Provided that the Strong Cosmic Censorship conjecture is true, a generic spatially compact maximal Einstein spacetime must be a predictive cosmological model. It is conjectured that the Strong Cosmic Censorship conjecture is not true, and, converting a vice to a virtue, it is argued that the failure of the Strong Cosmic Censorship conjecture would point to what may be general relativity's greatest prediction of all, namely, that general relativity predicts that general relativity cannot predict the entire history of the universe.

  7. Modelling Farm Animal Welfare

    PubMed Central

    Collins, Lisa M.; Part, Chérie E.

    2013-01-01

    Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411

  8. Distributed fuzzy system modeling

    SciTech Connect

    Pedrycz, W.; Chi Fung Lam, P.; Rocha, A.F.

    1995-05-01

The paper introduces and studies the idea of distributed modeling, treating it as a new paradigm of fuzzy system modeling and analysis. This form of modeling is oriented towards developing individual (local) fuzzy models for specific modeling landmarks (expressed as fuzzy sets) and determining the essential logical relationships between these local models. The models themselves are implemented in the form of logic processors, regarded as specialized fuzzy neural networks. The interaction between the processors is developed in either an inhibitory or an excitatory way. More descriptively, the distributed model can be viewed as a collection of fuzzy finite state machines with their individual local first- or higher-order memories. It is also clarified how the concept of distributed modeling narrows the gap between purely numerical (quantitative) models and the qualitative ones originating within the realm of Artificial Intelligence. The overall architecture of distributed modeling is discussed along with detailed learning schemes. The results of extensive simulation experiments are provided as well. 17 refs.

  9. Horizontal model fusion paradigm

    NASA Astrophysics Data System (ADS)

    Julier, Simon J.; Durrant-Whyte, Hugh F.

    1996-05-01

In navigation and tracking problems, the identification of an appropriate model of vehicular or target motion is vital to most practical data fusion algorithms. The true system dynamics are rarely known, and approximations are usually employed. Since systems can exhibit strikingly different behaviors, multiple models may be needed to describe each of these behaviors. Current methods either use model switching (a single process model is chosen from the set using a decision rule) or consider the models as a set of competing hypotheses, only one of which is 'correct'. However, these methods fail to exploit the fact that all models are of the same system and that all of them are, to some degree, 'correct'. In this paper we present a new paradigm for fusing information from a set of multiple process models. The predictions from each process model are regarded as observations which are corrupted by correlated noise. By employing the standard Kalman filter equations we combine data from multiple sensors and multiple process models optimally. There are a number of significant practical advantages to this technique. First, the performance of the system always equals or betters that of the best estimator in the set of models being used. Second, the same decision-theoretic machinery can be used to select the process models as well as the sensor suites.
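The core idea above, treating one model's prediction as a noisy, correlated observation of another's and fusing them with Kalman-style gain equations, can be sketched as follows. This is an illustrative sketch of the standard correlated-estimate fusion form, not the paper's exact formulation; the function name and numbers are hypothetical:

```python
import numpy as np

def fuse_correlated(x1, P1, x2, P2, P12):
    """Fuse two estimates of the same state whose errors may be correlated.

    The second estimate is treated as an observation of the first,
    corrupted by noise with cross-covariance P12 (Kalman-style update).
    """
    P21 = P12.T
    S = P1 + P2 - P12 - P21             # innovation covariance
    K = (P1 - P12) @ np.linalg.inv(S)   # fusion gain
    x = x1 + K @ (x2 - x1)              # fused state estimate
    P = P1 - K @ (P1 - P21)             # fused error covariance
    return x, P

# Example: two scalar predictions of the same state, uncorrelated errors.
x, P = fuse_correlated(np.array([0.0]), np.array([[4.0]]),
                       np.array([1.0]), np.array([[1.0]]),
                       np.array([[0.0]]))
# With P12 = 0 this reduces to inverse-variance weighting: the fused
# variance P1*P2/(P1+P2) = 0.8 is smaller than either input variance.
```

The fused covariance never exceeds that of the better input estimate, which mirrors the paper's claim that the fused system equals or betters the best estimator in the model set.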

  10. Calibrated Properties Model

    SciTech Connect

    T. Ghezzehej

    2004-10-04

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  11. A model of strength

    USGS Publications Warehouse

    Johnson, Douglas H.; Cook, R.D.

    2013-01-01

In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop them. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.

  12. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to view the model selectively in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing the resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  13. Physical modelling in biomechanics.

    PubMed Central

    Koehl, M A R

    2003-01-01

    Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350

  14. Programming Models in HPC

    SciTech Connect

    Shipman, Galen M.

    2016-06-13

    These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.

  15. Foam process models.

    SciTech Connect

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

In this report, we summarize our work on developing a production-level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion, as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

  16. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  17. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption not realistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph not necessarily regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given associated with some practical work.
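The key concept named above, conditional independence and its link to Markov properties, can be made concrete with the smallest possible structured model: a three-node chain A -> B -> C. The sketch below (illustrative probabilities, not taken from the chapter) builds the joint distribution of such a chain and checks that C is independent of A once B is known:

```python
import itertools
import numpy as np

# Joint distribution of a chain A -> B -> C factorizes as
# p(a, b, c) = p(a) * p(b | a) * p(c | b).
pA = np.array([0.6, 0.4])
pB_A = np.array([[0.7, 0.3],    # p(b | a = 0)
                 [0.2, 0.8]])   # p(b | a = 1)
pC_B = np.array([[0.9, 0.1],    # p(c | b = 0)
                 [0.5, 0.5]])   # p(c | b = 1)

joint = np.einsum('a,ab,bc->abc', pA, pB_A, pC_B)

# Markov property: given B, knowing A adds nothing about C,
# i.e. p(c | a, b) equals p(c | b) for every combination of (a, b).
for a, b in itertools.product(range(2), repeat=2):
    p_c_given_ab = joint[a, b] / joint[a, b].sum()
    assert np.allclose(p_c_given_ab, pC_B[b])
```

The same factorize-then-condition reasoning underlies both Bayesian networks (directed factorizations) and Markov networks (neighbourhood-based factorizations such as the MRFs used for image segmentation).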

  18. Ventilation Model Report

    SciTech Connect

    V. Chipman; J. Case

    2002-12-20

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). 
Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of

  19. Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling.

    PubMed

    Yesson, C; Culham, A

    2006-10-01

    We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates, others show wide-ranging generalism. This demonstrates that Phyloclimatic modeling could be repeated for other plant groups and is fundamental to the understanding of

  20. Constitutive models in LAME.

    SciTech Connect

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms have been implemented in LAME. The structure and testing of LAME is described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from the simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included only for the awareness purposes. 
The analyst can take confidence in the fact that each model has been properly implemented.

  1. Modeling and Prediction Overview

    SciTech Connect

    Ermak, D L

    2002-10-18

    Effective preparation for and response to the release of toxic materials into the atmosphere hinges on accurate predictions of the dispersion pathway, concentration, and ultimate fate of the chemical or biological agent. Of particular interest is the threat to civilian populations within major urban areas, which are likely targets for potential attacks. The goals of the CBNP Modeling and Prediction area are: (1) Development of a suite of validated, multi-scale, atmospheric transport and fate modeling capabilities for chemical and biological agent releases within the complex urban environment; (2) Integration of these models and related user tools into operational emergency response systems. Existing transport and fate models are being adapted to treat the complex atmospheric flows within and around structures (e.g., buildings, subway systems, urban areas) and over terrain. Relevant source terms and the chemical and physical behavior of gas- and particle-phase species (e.g., losses due to deposition, bio-agent viability, degradation) are also being developed and incorporated into the models. Model validation is performed using both laboratory and field data. CBNP is producing and testing a suite of models with differing levels of complexity and fidelity to address the full range of user needs and applications. Lumped-parameter transport models are being developed for subway systems and building interiors, supplemented by the use of computational fluid dynamics (CFD) models to describe the circulation within large, open spaces such as auditoriums. Both sophisticated CFD transport models and simpler fast-response models are under development to treat the complex flow around individual structures and arrays of buildings. Urban parameterizations are being incorporated into regional-scale weather forecast, meteorological data assimilation, and dispersion models for problems involving larger-scale urban and suburban areas. Source term and dose response models are being

  2. Drought modeling - A review

    NASA Astrophysics Data System (ADS)

    Mishra, Ashok K.; Singh, Vijay P.

    2011-06-01

In recent years droughts have been occurring frequently, and their impacts are being aggravated by the rise in water demand and the variability in hydro-meteorological variables due to climate change. As a result, drought hydrology has been receiving much attention. A variety of concepts have been applied to modeling droughts, ranging from simplistic approaches to more complex models. It is important to understand different modeling approaches as well as their advantages and limitations. This paper, supplementing the previous paper (Mishra and Singh, 2010) where different concepts of droughts were highlighted, reviews different methodologies used for drought modeling, which include drought forecasting, probability based modeling, spatio-temporal analysis, use of Global Climate Models (GCMs) for drought scenarios, land data assimilation systems for drought modeling, and drought planning. It is found that there have been significant improvements in modeling droughts over the past three decades. Hybrid models, incorporating large scale climate indices, seem to be promising for long lead-time drought forecasting. Further research is needed to understand the spatio-temporal complexity of droughts under climate change due to changes in spatio-temporal variability of precipitation. Applications of copula based models for multivariate drought characterization seem to be promising for better drought characterization. Research on decision support systems should be advanced for issuing warnings, assessing risk, and taking precautionary measures, and effective ways for the flow of information from decision makers to users need to be developed. Finally, some remarks are made regarding the future outlook for drought research.

  3. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to the data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and to perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
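The fit-versus-parameter-count tradeoff the abstract describes can be sketched with the Bayesian information criterion, a crude stand-in for the full Bayesian evidence the authors compute; the polynomial candidate models and all numerical values below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": a quadratic trend plus noise.
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)

def bic(y, y_fit, n_params):
    """Bayesian information criterion: goodness of fit penalized
    by the number of free parameters."""
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Candidate models with 2..6 parameters (polynomial degrees 1..5).
# Higher degrees always lower the residual, but the penalty term
# makes the extra parameters pay for themselves.
scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)

best_degree = min(scores, key=scores.get)
print(best_degree, scores)
```

The same scoring loop would apply to Maxwell models with increasing numbers of modes: each added mode must reduce the misfit by more than its penalty.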

  4. The Earth System Model

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol

    2003-01-01

The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from components (atmosphere, ocean, ice, land, chemistry, solid earth, etc.) merged through a coupling program which is responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments to evaluate proposed observing systems. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese Earth Simulator's theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.

  5. Geochemical modeling: a review

    SciTech Connect

    Jenne, E.A.

    1981-06-01

Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. The other family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness to applications in nuclear-waste isolation, etc. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic data bases and systematic validation before they are generally accepted.

  6. Improved Screened Hydrogenic Model

    SciTech Connect

    Nishikawa, T.

    1996-05-01

The Screened Hydrogenic Model is widely used for energy level calculations in hydrodynamic codes for inertial confinement fusion because it requires only simple algebraic calculations. More's Screened Hydrogenic Model and his screening constants are usually used to calculate opacity and the equation of state. With his model, energy levels can be calculated consistently with the ion's total energy. But his model takes into account only the principal quantum number dependence and cannot reproduce hydrogenic energy levels. As precise opacity measurements have been performed, it has become clear that his model is not adequate for opacity calculations. In this paper, his model is improved within the framework of the Screened Hydrogenic Model. The improved model can reproduce the hydrogenic energy levels and includes the azimuthal quantum number dependence and the effect from other quantum states (a kind of inner quantum number). Screening constants are fitted to spectroscopic data and sophisticated calculations. With the improved model, energy levels are calculated more accurately for low-Z ions. © 1996 American Institute of Physics.

  7. Mathematical models of hysteresis

    SciTech Connect

    Mayergoyz, I.D.

    1991-01-01

The research described in this proposal is currently being supported by the US Department of Energy under the contract "Mathematical Models of Hysteresis". Thus, before discussing the proposed research in detail, it is worthwhile to describe and summarize the main results achieved in the course of our work under the above contract. Our ongoing research has largely been focused on the development of mathematical models of hysteretic nonlinearities with nonlocal memories. The distinct feature of these nonlinearities is that their current states depend on past histories of input variations. It turns out that the memories of hysteretic nonlinearities are quite selective. Indeed, experiments show that only some past input extrema leave their marks upon future states of hysteretic nonlinearities. Thus special mathematical tools are needed in order to describe the nonlocal selective memories of hysteretic nonlinearities. Our research has been primarily concerned with Preisach-type models of hysteresis. All these models have a common generic feature: they are constructed as superpositions of the simplest hysteretic nonlinearities, rectangular loops. Our study has by and large been centered on the following topics: various generalizations and extensions of the classical Preisach model, necessary and sufficient conditions for the representation of actual hysteretic nonlinearities by various Preisach-type models, solution of identification problems for these models, and numerical implementation and experimental testing of Preisach-type models. Although the study of Preisach-type models has constituted the main direction of the research, some effort has also been made to establish interesting connections between these models and such topics as the critical state model for superconducting hysteresis, the classical Stoner-Wohlfarth model of vector magnetic hysteresis, thermal activation type models for viscosity, magnetostrictive hysteresis, and neural networks.
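The "superposition of rectangular loops" construction can be sketched as a minimal classical scalar Preisach model; the uniform hysteron density, sample counts, and thresholds below are illustrative assumptions, not the proposal's formulation:

```python
import numpy as np

# Hysterons: rectangular loops with up-switch threshold `alpha` and
# down-switch threshold `beta` (beta <= alpha), sampled uniformly
# over the Preisach triangle inside [-1, 1]^2.
rng = np.random.default_rng(1)
n = 20000
a = rng.uniform(-1.0, 1.0, n)
b = rng.uniform(-1.0, 1.0, n)
alpha, beta = np.maximum(a, b), np.minimum(a, b)
state = -np.ones(n)  # all hysterons start in the "down" state

def apply_input(u):
    """Switch each rectangular loop according to the input u; loops
    with beta < u < alpha keep their state -- that is the memory."""
    state[u >= alpha] = 1.0
    state[u <= beta] = -1.0
    return state.mean()  # model output: superposition of all loops

m0 = apply_input(0.0)   # virgin state at u = 0: about -0.5
apply_input(1.0)        # drive the input up to saturation
m1 = apply_input(0.0)   # back at u = 0 the output stays near +0.5
print(m0, m1)
```

The gap between `m0` and `m1` at the same input value is the hysteresis: which past extrema the input visited determines the current output.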

  8. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

The best way of verifying turbulence models is to make a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using the second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This therefore includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. Which of the three models performs better is examined, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward the second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful to turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat flux equations.

  9. Modeling of surface reactions

    SciTech Connect

    Ray, T.R.

    1993-01-01

Mathematical models are used to elucidate properties of the monomer-monomer and monomer-dimer type chemical reactions on a two-dimensional surface. The authors use mean-field and lattice gas models, detailing similarities and differences due to correlations in the lattice gas model. The monomer-monomer, or AB, surface reaction model, with no diffusion, is investigated for various reaction rates k. Study of the exact rate equations reveals that poisoning always occurs if the adsorption rates of the reactants are unequal. If the adsorption rates of the reactants are equal, simulations show slow poisoning, associated with clustering of reactants. This behavior is also shown for the two-dimensional voter model. The authors analyze precisely the slow poisoning kinetics by an analytic treatment for the AB reaction with infinitesimal reaction rate, and by direct comparison with the voter model. They extend the results to incorporate the effects of place-exchange diffusion, and they compare the AB reaction with infinitesimal reaction rate and no diffusion to the voter model with diffusion at rate 1/2. They also consider the relationship of the voter model to the monomer-dimer model, and investigate the latter model for small reaction rates. The monomer-dimer, or AB2, surface reaction model is also investigated. Specifically, they consider the ZGB model for CO oxidation, and generalizations of this model which include adspecies diffusion. A theory of nucleation to describe properties of non-equilibrium first-order transitions, specifically the evolution between "reactive" steady states and trivial adsorbing states, is derived. The behavior of the "epidemic" survival probability, Ps, for a non-poisoned patch surrounded by a poisoned background is determined below the poisoning transition.
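The monomer-monomer (AB) lattice-gas model can be sketched as a Monte Carlo simulation; the lattice size, step count, and the infinite-reaction-rate update rule below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

# Monomer-monomer (AB) reaction on a periodic L x L lattice with an
# effectively infinite reaction rate: an arriving monomer reacts at
# once with an adsorbed particle of the other species on a neighboring
# site, and both leave the surface. Codes: 0 = empty, 1 = A, 2 = B.
rng = np.random.default_rng(2)
L = 32
grid = np.zeros((L, L), dtype=int)

def step():
    i, j = rng.integers(0, L, 2)
    if grid[i, j] != 0:
        return                      # site occupied: adsorption fails
    species = rng.integers(1, 3)    # A and B arrive with equal rates
    other = 3 - species
    neighbors = [((i + 1) % L, j), ((i - 1) % L, j),
                 (i, (j + 1) % L), (i, (j - 1) % L)]
    partners = [nb for nb in neighbors if grid[nb] == other]
    if partners:
        # react with a randomly chosen neighbor; both sites end empty
        grid[partners[rng.integers(len(partners))]] = 0
    else:
        grid[i, j] = species        # adsorb; like-species clusters grow

for _ in range(200000):
    step()

theta_A = np.mean(grid == 1)        # A coverage
theta_B = np.mean(grid == 2)        # B coverage
print(theta_A, theta_B)
```

With equal adsorption rates the coverages wander as like-species domains coarsen, the slow-poisoning behavior the abstract relates to the voter model.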

  10. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  11. Modeling Imports in a Keynesian Expenditure Model

    ERIC Educational Resources Information Center

    Findlay, David W.

    2010-01-01

    The author discusses several issues that instructors of introductory macroeconomics courses should consider when introducing imports in the Keynesian expenditure model. The analysis suggests that the specification of the import function should partially, if not completely, be the result of a simple discussion about the spending and import…

  12. Animal models of scoliosis.

    PubMed

    Bobyn, Justin D; Little, David G; Gray, Randolph; Schindeler, Aaron

    2015-04-01

    Multiple techniques designed to induce scoliotic deformity have been applied across many animal species. We have undertaken a review of the literature regarding experimental models of scoliosis in animals to discuss their utility in comprehending disease aetiology and treatment. Models of scoliosis in animals can be broadly divided into quadrupedal and bipedal experiments. Quadrupedal models, in the absence of axial gravitation force, depend upon development of a mechanical asymmetry along the spine to initiate a scoliotic deformity. Bipedal models more accurately mimic human posture and consequently are subject to similar forces due to gravity, which have been long appreciated to be a contributing factor to the development of scoliosis. Many effective models of scoliosis in smaller animals have not been successfully translated to primates and humans. Though these models may not clarify the aetiology of human scoliosis, by providing a reliable and reproducible deformity in the spine they are a useful means with which to test interventions designed to correct and prevent deformity.

  13. Energy balance climate models

    NASA Technical Reports Server (NTRS)

    North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.

    1981-01-01

An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes is solved, and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.
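The simplest member of this model hierarchy, a globally averaged energy balance with ice-albedo feedback, can be sketched as follows; all parameter values are illustrative, and the paper's models are resolved in latitude rather than zero-dimensional:

```python
import numpy as np

# Globally averaged energy balance:
#   C dT/dt = S * (1 - albedo(T)) / 4  -  eps * sigma * T^4
S = 1361.0        # solar constant, W m^-2
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.61        # effective emissivity (crude greenhouse effect)
C = 4.0e8         # heat capacity of an ocean mixed layer, J m^-2 K^-1

def albedo(T):
    """Ice-albedo feedback: high albedo when frozen, low when warm,
    with a linear ramp in between."""
    return np.interp(T, [248.0, 268.0], [0.7, 0.3])

def equilibrate(T, dt=1.0e6, steps=5000):
    """Forward-Euler march to a steady state (about 160 model years)."""
    for _ in range(steps):
        T += dt / C * (S * (1.0 - albedo(T)) / 4.0 - eps * sigma * T**4)
    return T

warm = equilibrate(300.0)   # warm start: present-like climate, ~288 K
cold = equilibrate(220.0)   # cold start: ice-covered state, ~233 K
print(warm, cold)
```

The two distinct equilibria from different initial temperatures illustrate the multiple stable states whose stability the survey analyzes formally.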

  14. Load Model Data Tool

    SciTech Connect

    David Chassin, Pavel Etingov

    2013-04-30

The LMDT software automates the preparation of composite load model data in the formats supported by the major power system software vendors (GE and Siemens). Proper representation of the composite load model in power system dynamic analysis is very important. Software tools for power system simulation like GE PSLF and Siemens PSSE already include algorithms for composite load modeling. However, these tools require that the input information on the composite load be provided in custom formats. Preparation of this data is time consuming and requires multiple manual operations. The LMDT software automates this process. It is designed to generate composite load model data, using default load composition data, motor information, and bus information as input. The software processes the input information and produces a composite load model. The generated model can be stored in the .dyd format supported by the GE PSLF package or the .dyr format supported by the Siemens PSSE package.

  15. Outside users payload model

    NASA Technical Reports Server (NTRS)

    1985-01-01

The outside users payload model, which continues a series of documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.

  16. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

  17. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations in the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
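The Monte Carlo combination of scattered microstructural properties with Paris-law growth can be sketched for the long-crack stage alone; the distributions and material constants below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 10000

# Microstructural scatter represented as lognormal variation in the
# initial flaw size and the Paris coefficient (illustrative values).
a0 = rng.lognormal(np.log(1e-4), 0.3, n_samples)   # initial crack, m
C = rng.lognormal(np.log(1e-11), 0.4, n_samples)   # Paris coefficient
m = 3.0                                            # Paris exponent
dsigma = 100.0                                     # stress range, MPa
a_f = 1e-2                                         # final crack size, m

# da/dN = C * (dK)^m with dK = dsigma * sqrt(pi * a) integrates in
# closed form from a0 to a_f (valid for m != 2):
p = 1.0 - m / 2.0
N = (a_f**p - a0**p) / (p * C * (dsigma * np.sqrt(np.pi)) ** m)

# The sample of lives N is the modeled scatter in fatigue life.
print(np.median(N), N.std() / N.mean())
```

Each sample plays the role of one microelement realization; the resulting spread in `N` is the statistical distribution of life that the full model builds up stage by stage.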

  18. Introduction to Biological Models

    DTIC Science & Technology

    2011-05-11

three steady states if NT > (dZ/αg) + (dP/µ), with a non-zero P,Z point being a stable attractor. The NPZ model assumes all dead organisms or excreted material is immediately remineralized to usable nutrient. In contrast, the NPZD model assumes that dead organisms and unassimilated phytoplankton contribute to a detrital pool that eventually... Figure 3: Schematic representation of the NPZD model showing the fluxes of biomass and the
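The NPZD structure described above can be sketched as a closed box model; a useful property to verify is that total nitrogen is conserved, since every loss term reappears as a source elsewhere. Parameter values and functional forms below are illustrative assumptions:

```python
# Minimal NPZD box model: nutrient (N) is taken up by phytoplankton
# (P), grazed by zooplankton (Z); mortality and unassimilated grazing
# feed a detrital pool (D) that remineralizes back to nutrient.
mu, k = 1.0, 0.5            # phytoplankton growth rate, half-saturation
g, a = 0.8, 0.6             # grazing rate, assimilation efficiency
mP, mZ, r = 0.1, 0.1, 0.2   # mortalities and remineralization rate

def rhs(N, P, Z, D):
    uptake = mu * N / (k + N) * P
    grazing = g * P * Z
    return (-uptake + r * D,                             # dN/dt
            uptake - grazing - mP * P,                   # dP/dt
            a * grazing - mZ * Z,                        # dZ/dt
            (1.0 - a) * grazing + mP * P + mZ * Z - r * D)  # dD/dt

N, P, Z, D = 9.0, 0.5, 0.3, 0.2
total0 = N + P + Z + D
dt = 0.01
for _ in range(50000):      # 500 model days, forward Euler
    dN, dP, dZ, dD = rhs(N, P, Z, D)
    N, P, Z, D = N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

# The four derivatives sum to zero, so the box conserves total
# nitrogen to round-off even with a crude integrator.
print(N, P, Z, D, N + P + Z + D - total0)
```

Dropping D and routing all losses straight back to N recovers the simpler NPZ variant contrasted in the abstract.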

  19. Coastal Modeling System

    DTIC Science & Technology

    2014-09-04

Coastal Inlets Research Program, Coastal Modeling System. The work unit develops the Coastal Modeling System (CMS) and conducts basic research to further understanding of sediment transport under mixed forcing from waves and currents. The CMS is a suite of coupled two-dimensional numerical models for simulating waves, hydrodynamics, salinity and sediment transport, and morphology change. The CMS was identified by the USACE Hydraulics and

  20. Acid rain: Mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H. M.

    1980-01-01

    A mesoscale numerical model of the Florida peninsula was formulated and applied to a dry, neutral atmosphere. The prospective use of the STAR-100 computer for the submesoscale model is discussed. The numerical model presented is tested under synoptically undisturbed conditions. Two cases, differing only in the direction of the prevailing geostrophic wind, are examined: a prevailing southwest wind and a prevailing southeast wind, both 6 m/sec at all levels initially.

  1. HOMER® Micropower Optimization Model

    SciTech Connect

    Lilienthal, P.

    2005-01-01

    NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.

  2. Los Alamos Programming Models

    SciTech Connect

    Bergen, Benjamin Karl

    2016-07-07

This is the PDF of a PowerPoint presentation from a teleconference on Los Alamos programming models. It starts by listing the assumptions for the programming models and then details a hierarchical programming model at the system level and the node level. It then details how to map this to their internal nomenclature. Finally, a list is given of what they are currently doing in this regard.

  3. AREST model description

    SciTech Connect

    Engel, D.W.; McGrail, B.P.

    1993-11-01

The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) code at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository. The EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all the input parameters to be derived from statistical distributions. Recently, a one-dimensional numerical model was also incorporated into AREST to allow for more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to consider the reactive coupling of the processes that are involved in the release process. Such coupling would include: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, to name a few. Several of these coupled processes are already incorporated in the current version of AREST.
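The decay-chain handling mentioned above rests on Bateman-type solutions; a minimal sketch for a two-member chain, checked against a direct numerical integration, is below. The decay constants and inventory are illustrative, not AREST inputs:

```python
import numpy as np

# Two-member decay chain N1 -> N2 -> (stable): Bateman's analytical
# solution versus a simple forward-Euler integration.
lam1, lam2 = 0.05, 0.01   # decay constants, 1/yr (illustrative)
N1_0 = 1.0e6              # initial inventory of the parent

def bateman(t):
    """Analytical inventories of parent and daughter at time t."""
    N1 = N1_0 * np.exp(-lam1 * t)
    N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t)
                                        - np.exp(-lam2 * t))
    return N1, N2

# Forward-Euler check of the same chain out to t = 100 yr.
N1, N2, dt = N1_0, 0.0, 0.01
for _ in range(int(100 / dt)):
    N1, N2 = (N1 - dt * lam1 * N1,
              N2 + dt * (lam1 * N1 - lam2 * N2))

a1, a2 = bateman(100.0)
print(a1, a2, N1, N2)   # analytical and numerical agree closely
```

Longer chains extend the same pattern with one exponential term per ancestor, which is what makes analytical treatment of arbitrary-length chains tractable.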

  4. Computer Models of Proteins

    NASA Technical Reports Server (NTRS)

    2000-01-01

Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.

  5. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  6. Liftoff Model for MELCOR.

    SciTech Connect

    Young, Michael F.

    2015-07-01

Aerosol particles that deposit on surfaces may be subsequently resuspended by air flowing over the surface. A review of models for this liftoff process is presented and compared to available data. Based on this review, a model that agrees with existing data and is readily computed is presented for incorporation into a system level code such as MELCOR.

  7. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts are, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  8. The Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles; Saile, Lynn; Freiere deCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David

    2010-01-01

    The goals of the Integrated Medical Model (IMM) are to develop an integrated, quantified, evidence-based decision support tool useful to crew health and mission planners and to help align science, technology, and operational activities intended to optimize crew health, safety, and mission success. Presentation slides address scope and approach, beneficiaries of IMM capabilities, history, risk components, conceptual models, development steps, and the evidence base. Space adaptation syndrome is used to demonstrate the model's capabilities.

  9. Conceptual IT model

    NASA Astrophysics Data System (ADS)

    Arnaoudova, Kristina; Stanchev, Peter

    2015-11-01

Business processes are the key asset of every organization. The design of business process models is a foremost concern and target among an organization's functions. Business processes and their proper management depend heavily on the performance of software applications and technology solutions. This paper attempts to define a new conceptual model of an IT service provider, which can be examined as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.

  10. Compositional Modeling with DPNs

    DTIC Science & Technology

    2007-11-02

Technical Report: Compositional Modeling with DPNs. Geoffrey Zweig and Stuart Russell. Report No. UCB... Sept. 8, 1997. Abstract: Dynamic probabilistic networks (DPNs) are a powerful and efficient method for

  11. Numerical Modeling Experiments

    DTIC Science & Technology

    1974-09-01

presence of clouds is associated with the occurrence of condensation in the atmospheric models. Cloudiness at a particular grid point is introduced...when saturation is predicted as a result of either large-scale moisture flux convergence or vertical convective adjustment. In most models such clouds ... cloud top, cloud thickness, and liquid-water content. In some general circulation models the local fractional convective cloud amounts are taken

  12. Gauge Messenger Models

    SciTech Connect

    Kim, Hyung Do

    2006-11-28

We consider gauge messenger models in which X and Y gauge bosons and gauginos are messengers of supersymmetry breaking. In simple gauge messenger models, all the soft parameters except μ and Bμ are calculated in terms of a single scale parameter MSUSY, which is proportional to F / MGUT. A unique prediction for dark matter in gauge messenger models is discussed. (Based on hep-ph/0601036 and hep-ph/0607169)

  13. Guidelines for Model Evaluation.

    DTIC Science & Technology

    1979-01-01

by a decisionmaker. The full-scale evaluation of a complex model can be an expensive, time-consuming effort requiring diverse talents and skills...relative to PIES, were documented in a report to the Congress. 2/ An important side effect of that document was that a foundation was laid for model...while for model evaluation there are no generally accepted standards or methods. Hence, GAO perceives the need to expand upon the lessons learned in

  14. Global Atmospheric Aerosol Modeling

    NASA Technical Reports Server (NTRS)

    Hendricks, Johannes; Aquila, Valentina; Righi, Mattia

    2012-01-01

    Global aerosol models are used to study the distribution and properties of atmospheric aerosol particles as well as their effects on clouds, atmospheric chemistry, radiation, and climate. The present article provides an overview of the basic concepts of global atmospheric aerosol modeling and shows some examples from a global aerosol simulation. Particular emphasis is placed on the simulation of aerosol particles and their effects within global climate models.

  15. Rat Endovascular Perforation Model

    PubMed Central

    Sehba, Fatima A.

    2014-01-01

    Experimental animal models of aneurysmal subarachnoid hemorrhage (SAH) have provided a wealth of information on the mechanisms of brain injury. The rat endovascular perforation (EVP) model replicates the early pathophysiology of SAH and hence is frequently used to study early brain injury following SAH. This paper presents a brief review of the historical development of the EVP model, details the technique used to create SAH, and discusses considerations necessary to overcome technical challenges. PMID:25213427

  16. Structural model integrity

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.; Lahey, R. S.; Haggenmacher, G. W.

    1977-01-01

    Many of the practical aspects and problems of ensuring the integrity of a structural model are discussed, as well as the steps which have been taken in the NASTRAN system to assure that these checks can be routinely performed. Model integrity as used applies not only to the structural model but also to the loads applied to the model. Emphasis is also placed on the fact that when dealing with substructure analysis, all of the checking procedures discussed should be applied at the lowest level of substructure prior to any coupling.

  17. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
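    The flow-down arithmetic behind such a budget can be sketched in a few lines. Assuming independent error sources combined by root-sum-square (a common budgeting convention, not something this abstract prescribes), with hypothetical component names and allocations:

```python
import math

def rss(allocations):
    """Root-sum-square combination of independent error contributions."""
    return math.sqrt(sum(a * a for a in allocations))

# Hypothetical pointing-error budget (arcsec): check that component
# allocations stay within the system-level requirement.
components = {"sensor_noise": 2.0, "thermal_drift": 1.5, "model_error": 1.0}
total = rss(components.values())
requirement = 3.0
margin = requirement - total  # positive margin means the budget closes
```

    The same combination rule applies recursively at each level of the hierarchy, which is what lets a system-level allowable error be partitioned among subassemblies.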

  18. Photovoltaic array performance model.

    SciTech Connect

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics for photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model has been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
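    One widely cited piece of the Sandia model is its empirical module-temperature relation, T_m = E * exp(a + b * WS) + T_a, with a cell-temperature correction proportional to irradiance. The sketch below uses that published form; the coefficient values are typical open-rack numbers quoted in the SAPM literature and are assumptions here, not values taken from this document:

```python
import math

def sapm_cell_temp(poa_irradiance, temp_air, wind_speed,
                   a=-3.56, b=-0.075, delta_t=3.0, e0=1000.0):
    """Estimated cell temperature (deg C) from the SAPM thermal relation.

    poa_irradiance : plane-of-array irradiance, W/m^2
    temp_air       : ambient air temperature, deg C
    wind_speed     : wind speed at module height, m/s
    a, b, delta_t  : empirical mounting-dependent coefficients (assumed)
    """
    temp_module = poa_irradiance * math.exp(a + b * wind_speed) + temp_air
    return temp_module + (poa_irradiance / e0) * delta_t

# Example: bright, mild, light-wind conditions.
tc = sapm_cell_temp(1000.0, 25.0, 2.0)
```

    Driving this relation with hourly irradiance and meteorological records is how the model's thermal characteristics feed into hourly performance predictions.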

  19. Railway switch transport model.

    PubMed

    Horvat, Martin; Prosen, Tomaž; Benenti, Giuliano; Casati, Giulio

    2012-11-01

    We propose a simple model of coupled heat and particle transport based on zero-dimensional classical deterministic dynamics, which is reminiscent of a railway switch whose action is a function only of the particle's energy. It is shown that already in the minimal three-terminal model, where the second terminal is considered as a probe with zero net particle and heat currents, one can find extremely asymmetric Onsager matrices as a consequence of time-reversal symmetry breaking of the model. This minimalistic transport model provides a better understanding of thermoelectric heat engines in the presence of time-reversal symmetry breaking.

  20. Models of Reality.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-06-02

    Conscious awareness of our environment is based on a feedback loop comprised of sensory input transmitted to the central nervous system, leading to construction of our ''model of the world'' (Lewis et al., 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory-derived processes. These are the processes which provide the designer the ability to meta-model (build a model of a model) the user, consequently matching the mental model of the user with that of the designer and, coincidentally, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized; it is closer to equivocal, thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and how to formulate these into models for success and knowledge-based outcomes is the subject of the discussion that follows.

  1. Lightning return stroke models

    NASA Technical Reports Server (NTRS)

    Lin, Y. T.; Uman, M. A.; Standler, R. B.

    1980-01-01

    We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations to the measured two-station fields. Using the new model, we derive return stroke charge and current statistics for about 100 subsequent strokes.

  2. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  3. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  4. Multifamily Envelope Leakage Model

    SciTech Connect

    Faakye, O.; Griffiths, D.

    2015-05-01

    The objective of the 2013 research project was to develop the model for predicting fully guarded test results (FGT), using unguarded test data and specific building features of apartment units. The model developed has a coefficient of determination R2 value of 0.53 with a root mean square error (RMSE) of 0.13. Both statistical metrics indicate that the model is relatively strong. When tested against data that was not included in the development of the model, prediction accuracy was within 19%, which is reasonable given that seasonal differences in blower door measurements can vary by as much as 25%.
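    As a minimal illustration of the two fit statistics reported above, the following computes a coefficient of determination and RMSE for made-up leakage values; the function and the numbers are illustrative, not the project's data:

```python
import math

def r2_and_rmse(observed, predicted):
    """Coefficient of determination (R2) and root-mean-square error,
    the two fit statistics quoted for the envelope-leakage model."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Toy illustration with made-up unguarded/guarded leakage pairs:
r2, rmse = r2_and_rmse([0.40, 0.55, 0.62, 0.71],
                       [0.45, 0.50, 0.65, 0.70])
```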

  5. Modelling urban snowmelt runoff

    NASA Astrophysics Data System (ADS)

    Valeo, C.; Ho, C. L. I.

    2004-12-01

    Few investigations have been made into modelling snowmelt in urban areas; hence, current urban snowmelt routines have adopted parameters and approaches intended for rural areas that are not appropriate in an urban environment. This paper examines problems with current urban snowmelt models and proposes a model that uses parameters developed from field studies focusing exclusively on urban snow. The Urban Snow Model (USM) uses an energy balance scheme at an hourly time step, changes in urban snow albedo, and incorporates eight different types of redistributed snow cover. USM is tested against observed flow data from a small residential community located in Calgary, Alberta. The degree-day method for snowmelt, the SWMM model, and a modified version of USM that incorporates a partial energy budget scheme relying only on net radiation, are also tested against the observed flow data. The full energy budget version of USM outperformed all other models in terms of time to peak, peak flowrate and model efficiency; however, the modified version of USM fared quite well and is recommended when a lack of data exists. The degree-day method and the SWMM models fared poorly and were unable to simulate peak flowrates in most cases. The tests also demonstrated the need to distribute snow into appropriate snow covers in order to simulate peak flowrates accurately and provide good model efficiency.
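    The degree-day baseline the authors test USM against can be sketched directly. The melt factor below is a plausible mid-range assumption, not a value from the paper; real factors are site-specific, which is part of why the method transfers poorly to urban snow:

```python
def degree_day_melt(temp_air, melt_factor=2.5, base_temp=0.0):
    """Daily melt (mm water equivalent) from the degree-day method.

    melt_factor : mm w.e. per deg C per day (assumed mid-range value)
    base_temp   : temperature threshold below which no melt occurs, deg C
    """
    return max(0.0, melt_factor * (temp_air - base_temp))

def degree_hour_melt(temp_air, melt_factor=2.5, base_temp=0.0):
    """Hourly variant, for comparison with models run at an hourly step."""
    return degree_day_melt(temp_air, melt_factor, base_temp) / 24.0
```

    Because the method sees only air temperature, it cannot respond to the radiation and albedo effects that an energy-balance scheme like USM resolves, which is consistent with its poor peak-flow performance in the tests above.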

  6. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance, and they have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target is to blame when the model fails. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  7. Evolutionary financial market models

    NASA Astrophysics Data System (ADS)

    Ponzi, A.; Aizawa, Y.

    2000-12-01

    We study computer simulations of two financial market models, the second a simplified version of the first. The first is a model of the self-organized formation and breakup of crowds of traders, motivated by the dynamics of competitive evolving systems, which shows interesting self-organized critical (SOC)-type behaviour without any fine tuning of control parameters. This SOC-type avalanching and stasis appear as realistic volatility clustering in the price returns time series. The market becomes highly ordered at ‘crashes’ but gradually loses this order through randomization during the intervening stasis periods. The second model is a model of stocks interacting through a competitive evolutionary dynamic in a common stock exchange. This model shows a self-organized ‘market confidence’. When this is high the market is stable, but when it gets low the market may become highly volatile. Volatile bursts rapidly increase the market confidence again. This model shows a phase transition as a temperature parameter is varied. The price returns time series in the transition region is a very realistic power-law truncated Lévy distribution with clustered volatility and volatility superdiffusion. This model also shows generally positive stock cross-correlations, as is observed in real markets. This model may shed some light on why such phenomena are observed.

  8. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  9. Convergence of a Moran model to Eigen's quasispecies model.

    PubMed

    Dalmau, Joseba

    2017-03-02

    We prove that a Moran model converges in probability to Eigen's quasispecies model in the infinite population limit. We show further that the invariant probability measure of the Moran model converges to the unique stationary solution of Eigen's quasispecies model.
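    The Moran model in question is a finite-population birth-death chain: at each step one individual reproduces with probability proportional to fitness and replaces a uniformly chosen individual. A minimal simulation sketch (two types, mutation omitted for brevity, hypothetical fitness values) looks like:

```python
import random

def moran_step(pop, fitness, rng):
    """One Moran step: choose a parent proportionally to fitness and
    overwrite a uniformly chosen individual with the parent's type."""
    weights = [fitness[x] for x in pop]
    parent = rng.choices(pop, weights=weights, k=1)[0]
    pop[rng.randrange(len(pop))] = parent

# Two-type example: type 1 carries a selective advantage.
rng = random.Random(0)
pop = [0] * 50 + [1] * 50
fitness = {0: 1.0, 1: 1.2}
for _ in range(20000):
    moran_step(pop, fitness, rng)
# Population size is conserved at every step; without mutation the
# chain eventually fixes on a single type.
```

    Eigen's quasispecies model adds mutation between types and takes the deterministic infinite-population limit, which is the regime of the convergence result above.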

  10. Anisotropic exchange-interaction model: From the Potts model to the exchange-interaction model

    NASA Astrophysics Data System (ADS)

    King, T. C.; Chen, H. H.

    1995-04-01

    A spin model called the anisotropic exchange-interaction model is proposed. The Potts model, the exchange-interaction model, and the spin-1/2 anisotropic Heisenberg model are special cases of the proposed model. Thermodynamic properties of the model on the bcc and the fcc lattices are determined by the constant-coupling approximation.

  11. Operator spin foam models

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Hellmann, Frank; Kamiński, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

    2011-05-01

    The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. Our operator spin foam framework can also be used for more general spin

  12. Why business models matter.

    PubMed

    Magretta, Joan

    2002-05-01

    "Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance.

  13. Biosphere Process Model Report

    SciTech Connect

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor

  14. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  15. Spiral model pilot project information model

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.

  16. Stormwater Management Model

    EPA Science Inventory

    SWMM is a model for urban hydrology. It has a long history and is relied upon by professional engineers in the US and around the world. SWMM provides both gray and green Infrastructure modeling capabilities. As such, it is a convenient tool for understanding the tradeoff between ...

  17. Model for Coastal Restoration

    SciTech Connect

    Thom, Ronald M.; Judd, Chaeli

    2007-07-27

    Successful restoration of wetland habitats depends on both our understanding of our system and our ability to characterize it. By developing a conceptual model, looking at different spatial scales, and integrating diverse data streams (GIS datasets and NASA products), we were able to develop a dynamic model for site prioritization based on both qualitative and quantitative relationships found in the coastal environment.

  18. Fictional models in science

    NASA Astrophysics Data System (ADS)

    Morrison, Margaret

    2014-02-01

    When James Clerk Maxwell set out his famous equations 150 years ago, his model of electromagnetism included a piece of pure fiction: an invisible, all-pervasive "aether" made up of elastic vortices separated by electric charges. Margaret Morrison explores how this and other "fictional" models shape science.

  19. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  20. Systematic Eclectic Models.

    ERIC Educational Resources Information Center

    Mahalik, James R.

    1990-01-01

    Presents and evaluates four systematic eclectic models of psychotherapy: Beutler's eclectic psychotherapy; Howard, Nance, and Myers' adaptive counseling and therapy; Lazarus' multimodal therapy; and Prochaska and DiClemente's transtheoretical approach. Examines support for these models and makes conceptual and empirical recommendations.…

  1. The General Teaching Model.

    ERIC Educational Resources Information Center

    Miles, David T.; Robinson, Roger E.

    The General Teaching Model is a procedural guide for the design, implementation, evaluation, and improvement of instruction. The Model is considered applicable to all levels of education, all subject matters, and any length of instructional unit. It consists of four components: 1) instructional objectives, 2) pre-assessment, 3) instructional…

  2. The Teacher Improvement Model.

    ERIC Educational Resources Information Center

    Pokalo, Mariann

    The Teacher Improvement Model was begun as an Organizational Development Project using the parallel systems approach in a school for emotionally disturbed junior high school students. Teachers volunteered for committee work and requested observations and evaluations in an effort to define and establish a discipline model best suited to them. Such…

  3. Modeling Biomimetic Mineralization

    DTIC Science & Technology

    2010-03-02

    combination of computational methods including Molecular Dynamics, Langevin Dynamics, and Monte Carlo, and theories including statistical mechanics and ... at the interfaces are neglected in our computer modeling so far. Force-field; molecular model; molecular dynamics; Langevin dynamics; Brownian

  4. Modeling Antibody Diversity.

    ERIC Educational Resources Information Center

    Baker, William P.; Moore, Cathy Ronstadt

    1998-01-01

    Understanding antibody structure and function is difficult for many students. The rearrangement of constant and variable regions during antibody differentiation can be effectively simulated using a paper model. Describes a hands-on laboratory exercise which allows students to model antibody diversity using readily available resources. (PVD)

  5. Modelling with Magnets.

    ERIC Educational Resources Information Center

    Gabel, Dorothy; And Others

    1992-01-01

    Chemistry can be described on three levels: sensory, molecular, and symbolic. Proposes a particle approach to teaching chemistry that uses magnets to aid students construct molecular models and solve particle problems. Includes examples of Johnstone's model of chemistry phenomena, a problem worksheet, and a student concept mastery sheet. (MDH)

  6. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  7. Acid rain: Microphysical model

    NASA Technical Reports Server (NTRS)

    Dingle, A. N.

    1980-01-01

    A microphysical model was used to simulate the case of a ground cloud without dilution by entrainment and without precipitation. The numerical integration techniques of the model are presented. The droplet size spectra versus time and the droplet molalities for each value of time are discussed.

  8. Modeling Water Filtration

    ERIC Educational Resources Information Center

    Parks, Melissa

    2014-01-01

    Model-eliciting activities (MEAs) are not new to those in engineering or mathematics, but they were new to Melissa Parks. Model-eliciting activities are simulated real-world problems that integrate engineering, mathematical, and scientific thinking as students find solutions for specific scenarios. During this process, students generate solutions…

  9. Models in Biology.

    ERIC Educational Resources Information Center

    Flannery, Maura C.

    1997-01-01

    Addresses the most popular models currently being chosen for biological research and the reasons behind those choices. Among the current favorites are zebra fish, fruit flies, mice, monkeys, and yeast. Concludes with a brief examination of the ethical issues involved, and why some animals may need to be replaced in research with model systems.…

  10. Writing Models, Versatile Writers.

    ERIC Educational Resources Information Center

    VanDeWeghe, Richard

    1983-01-01

    Presents five research-based writing models to help student writers analyze their composition processes: (1) discovery, (2) direct writing process, (3) five-stage process, (4) write-talk-write, and (5) four key questions. Discusses advantages and disadvantages of each model. (PD)

  11. Model State Efforts.

    ERIC Educational Resources Information Center

    Morgan, Gwen

    Models of state involvement in training child care providers are briefly discussed and the employers' role in training is explored. Six criteria for states that are taken as models are identified, and four are described. Various state activities are described for each criterion. It is noted that little is known about employer and other private…

  12. Rotorwash Operational Footprint Modeling

    DTIC Science & Technology

    2014-07-01

    lower drag coefficient. The developed analytical model for predicting unbalance is shown to correlate relatively well with the test data. This model ... coefficients and exponents within the equations. Correlation of RoWFoot to flight test data is fundamental to quantitatively trying to determine the ... using empirically derived coefficients and exponents through feedback from an iterative flight test correlation process. Rotorwash Operational

  13. pylightcurve: Exoplanet lightcurve model

    NASA Astrophysics Data System (ADS)

    Tsiaras, A.; Waldmann, I. P.; Rocchetto, M.; Varley, R.; Morello, G.; Damiano, M.; Tinetti, G.

    2016-12-01

    pylightcurve is a model for light-curves of transiting planets. It uses the four-coefficient law for the stellar limb darkening and returns the relative flux, F(t), as a function of the limb darkening coefficients, an, the Rp/R* ratio, and all the orbital parameters, based on the nonlinear limb darkening model (Claret 2000).
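    The underlying Claret (2000) nonlinear law gives the stellar intensity profile as I(mu)/I(1) = 1 - sum_{n=1..4} c_n * (1 - mu^(n/2)), where mu is the cosine of the angle between the line of sight and the stellar surface normal. A minimal sketch of just that law (not pylightcurve's API), with illustrative rather than fitted coefficients:

```python
def claret_intensity(mu, coeffs):
    """Relative stellar intensity I(mu)/I(1) under the Claret (2000)
    four-coefficient nonlinear limb-darkening law."""
    return 1.0 - sum(c * (1.0 - mu ** (n / 2.0))
                     for n, c in enumerate(coeffs, start=1))

# Illustrative (not fitted) coefficients c1..c4:
coeffs = (0.6, -0.2, 0.5, -0.1)
center = claret_intensity(1.0, coeffs)  # disk centre: exactly 1 by construction
limb = claret_intensity(0.1, coeffs)    # dimmer near the limb
```

    Integrating this profile over the portion of the stellar disk occulted by the planet, for a given Rp/R* and orbital geometry, is what yields the relative flux F(t) the model returns.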

  14. Legal Policy Optimizing Models

    ERIC Educational Resources Information Center

    Nagel, Stuart; Neef, Marian

    1977-01-01

    The use of mathematical models originally developed by economists and operations researchers is described for legal process research. Situations involving plea bargaining, arraignment, and civil liberties illustrate the applicability of decision theory, inventory modeling, and linear programming in operations research. (LBH)

  15. SOSS ICN Model Validation

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan

    2016-01-01

    Under the NASA-KAIA-KARI ATM research collaboration agreement, SOSS ICN Model has been developed for Incheon International Airport. This presentation describes the model validation work in the project. The presentation will show the results and analysis of the validation.

  16. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program which produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  17. Computer Model Documentation Guide.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    These guidelines for communicating effectively the details of computer model design and operation to persons with varying interests in a model recommend the development of four different types of manuals to meet the needs of managers, users, analysts and programmers. The guidelines for preparing a management summary manual suggest a broad spectrum…

  18. Modeling the Teaching Consultant.

    ERIC Educational Resources Information Center

    Johnson, Brian L.; And Others

    1990-01-01

    Discusses the teaching consultant process in computer programming courses, describes a teaching consultant model from both the teachers' and students' perspectives, and shows how this model can be used to develop an intelligent teaching consultant (ITC). Differences between this collection of expert systems and conventional intelligent tutoring…

  19. Modelling Rating Scales.

    ERIC Educational Resources Information Center

    Linacre, John M.

    Determination of the intentions of the test developer is fundamental to the choice of the analytical model for a rating scale. For confirmatory analysis, the developer's intentions inform the choice of the general form of the model, representing the manner in which the respondent interacts with the scale; these intentions also inform the choice of…

  20. Models and Metaphors

    ERIC Educational Resources Information Center

    Ivie, Stanley D.

    2007-01-01

    Humanity delights in spinning conceptual models of the world. These models, in turn, mirror their respective root metaphors. Three root metaphors--spiritual, organic, and mechanical--have dominated western thought. The spiritual metaphor runs from Plato, through Hegel, and connects with Montessori. The organic metaphor extends from Aristotle,…

  1. Modeling HIV Cure

    NASA Astrophysics Data System (ADS)

    Perelson, Alan; Conway, Jessica; Cao, Youfang

    A large effort is being made to find a means to cure HIV infection. I will present a dynamical model of post-treatment control (PTC) or ``functional cure'' of HIV-infection. Some patients treated with suppressive antiviral therapy have been taken off of therapy and then spontaneously control HIV infection such that the amount of virus in the circulation is maintained undetectable by clinical assays for years. The model explains PTC occurring in some patients by having a parameter regime in which the model exhibits bistability, with both a low and high steady state viral load being stable. The model makes a number of predictions about how to attain the low PTC steady state. Bistability in this model depends upon the immune response becoming exhausted when overstimulated. I will also present a generalization of the model in which immunotherapy can be used to reverse immune exhaustion and compare model predictions with experiments in SIV infected macaques given immunotherapy and then taken off of antiretroviral therapy. Lastly, if time permits, I will discuss one of the hurdles to true HIV eradication, latently infected cells, and present clinical trial data and a new model addressing pharmacological means of flushing out the latent reservoir. Supported by NIH Grants AI028433 and OD011095.
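
    The bistability mechanism described above can be illustrated with a one-dimensional toy ODE (not the actual PTC model): two stable steady states separated by an unstable threshold, so the initial condition determines which state the system settles into. All values here are invented for illustration.

```python
def settle(v0, a=0.3, dt=0.001, steps=20000):
    """Forward-Euler integration of the toy bistable ODE
    dv/dt = v*(v - a)*(1 - v): stable fixed points at v = 0
    (a low, 'controlled' state) and v = 1 (a high state), with
    an unstable threshold at v = a."""
    v = v0
    for _ in range(steps):
        v += dt * v * (v - a) * (1.0 - v)
    return v

print(settle(0.2))  # starts below the threshold: decays toward 0
print(settle(0.4))  # starts above the threshold: grows toward 1
```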

  2. A night sky model.

    NASA Astrophysics Data System (ADS)

    Erpylev, N. P.; Smirnov, M. A.; Bagrov, A. V.

    A night sky model is proposed. It includes different components of light pollution, such as solar twilight, scattered moonlight, zodiacal light, the Milky Way, airglow, and artificial light pollution. The model is designed for calculating the efficiency of astronomical installations.

  3. PROSTATE REGULATION: MODELING ENDOGENOUS ...

    EPA Pesticide Factsheets

    Prostate function is an important indicator of androgen status in toxicological studies making predictive modeling of the relevant pharmacokinetics and pharmacodynamics desirable.

  4. Dynamic Eye Model.

    ERIC Educational Resources Information Center

    Journal of Science and Mathematics Education in Southeast Asia, 1981

    1981-01-01

    Instructions (with diagrams and parts list) are provided for constructing an eye model with a pliable lens made from a plastic bottle which can vary its convexity to accommodate changing positions of an object being viewed. Also discusses concepts which the model can assist in developing. (Author/SK)

  5. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R. ); Dennis, R.L. ); Seilkop, S.K. ); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. ); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
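
    The quantitative comparisons the protocol calls for (difference statistics and correlations of predicted versus observed values) can be sketched generically; the statistics below are standard definitions, not the exact EMEFS formulas.

```python
import math

def eval_stats(obs, pred):
    """Mean bias, RMSE, and Pearson correlation between observed
    and predicted values (generic model-evaluation statistics)."""
    n = len(obs)
    bias = sum(p - o for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return bias, rmse, cov / (so * sp)

print(eval_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.3]))
```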

  6. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  7. Model Reading Institute.

    ERIC Educational Resources Information Center

    Dworkin, Nancy; Dworkin, Yehoash

    The 1978 Summer Reading Institute, which served 58 Washington, D.C., elementary school children, is described in this paper. Major characteristics of the program model are first identified, along with elements that were added to the model in the preplanning stage. Numerous aspects of the program are then described, including the make-up of the…

  8. Modeling for Insights

    SciTech Connect

    Jacob J. Jacobson; Gretchen Matthern

    2007-04-01

    System Dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, System Dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The real power of System Dynamics modeling is gaining insights into total system behavior as time and system parameters are adjusted and the effects are visualized in real time. System Dynamics models allow decision makers and stakeholders to explore the long-term behavior and performance of complex systems, especially in the context of dynamic processes and changing scenarios, without having to wait decades to obtain field data or risk failure if a poor management or design approach is used. The Idaho National Laboratory has recently been developing a System Dynamics model of the US nuclear fuel cycle. The model is intended to be used to identify and understand interactions throughout the entire nuclear fuel cycle and to suggest sustainable development strategies. This paper describes the basic framework of the current model and presents examples of useful insights gained from the model thus far with respect to the sustainable development of nuclear power.
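
    The stock-and-flow structure at the heart of System Dynamics can be sketched in a few lines. The example below is a generic single-stock model with invented numbers, not a piece of the INL fuel-cycle model.

```python
def simulate_stock(inflow, outflow_frac, stock0, years, dt=0.25):
    """Euler integration of a single stock with a constant inflow and
    an outflow proportional to the stock level. The stock approaches
    the equilibrium inflow / outflow_frac over time."""
    stock = stock0
    for _ in range(int(years / dt)):
        stock += dt * (inflow - outflow_frac * stock)
    return stock

# equilibrium = 10.0 / 0.1 = 100.0; the stock approaches it asymptotically
print(simulate_stock(inflow=10.0, outflow_frac=0.1, stock0=0.0, years=100))
```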

  9. Teaching Mathematical Modelling.

    ERIC Educational Resources Information Center

    Jones, Mark S.

    1997-01-01

    Outlines a course at the University of Glamorgan in the United Kingdom in which a computer algebra system (CAS) is used to teach mathematical modelling. The format is based on continual assessment of group and individual work: stating the problem, listing features, and formulating the models. No additional mathematical word processing package is necessary.…

  10. Computational Modeling of Tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)

    1995-01-01

    This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.

  11. Connectionist Modelling and Education.

    ERIC Educational Resources Information Center

    Evers, Colin W.

    2000-01-01

    Provides a detailed, technical introduction to the state of cognitive science research, in particular the rise of the "new cognitive science," especially artificial neural net (ANN) models. Explains one influential ANN model and describes diverse applications and their implications for education. (EV)

  12. Model for Contingency Contracting.

    ERIC Educational Resources Information Center

    Prince George's Community Coll., Largo, MD. Dept. of Human Development.

    The Department of Human Development at Prince George's Community College has developed a contingency contracting model for the department's counseling and student services which presents planning and evaluation as interrelated parts of the same process of self-imposed accountability. The implementation of the model consisted of: (1) planning…

  13. Structural Equation Model Trees

    ERIC Educational Resources Information Center

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  14. Anticipatory model of cavitation

    NASA Astrophysics Data System (ADS)

    Kercel, Stephen W.; Allgood, Glenn O.; Dress, William B.; Hylton, James O.

    1999-03-01

    The Anticipatory System (AS) formalism developed by Robert Rosen provides some insight into the problem of embedding intelligent behavior in machines. AS emulates the anticipatory behavior of biological systems. AS bases its behavior on its expectations about the near future, and those expectations are modified as the system gains experience. The expectation is based on an internal model that is drawn from an appeal to physical reality. To be adaptive, the model must be able to update itself. To be practical, the model must run faster than real-time. The need for a physical model and the requirement that the model execute at extreme speeds have held back the application of AS to practical problems. Two recent advances make it possible to consider the use of AS for practical intelligent sensors. First, advances in transducer technology make it possible to obtain previously unavailable data from which a model can be derived. For example, acoustic emissions (AE) can be fed into a Bayesian system identifier that enables the separation of a weak characterizing signal, such as the signature of pump cavitation precursors, from a strong masking signal, such as a pump vibration feature. The second advance is the development of extremely fast but inexpensive digital signal processing hardware on which it is possible to run an adaptive Bayesian-derived model faster than real-time. This paper reports the investigation of an AS using a model of cavitation based on hydrodynamic principles and Bayesian analysis of data from high-performance AE sensors.

  15. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  16. Modeling Carbon Exchange

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Model results will be reviewed to assess different methods for bounding the terrestrial role in the global carbon cycle. It is proposed that a series of climate model runs could be scoped that would tighten the limits on the "missing sink" of terrestrial carbon and could also direct future satellite image analyses to search for its geographical location and understand its seasonal dynamics.

  17. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  18. Updating Situation Models

    ERIC Educational Resources Information Center

    Zwaan, Rolf A.; Madden, Carol J.

    2004-01-01

    The authors examined how situation models are updated during text comprehension. If comprehenders keep track of the evolving situation, they should update their models such that the most current information, the here and now, is more available than outdated information. Contrary to this updating hypothesis, E. J. O'Brien, M. L. Rizzella, J. E.…

  19. Solar Atmosphere Models

    NASA Astrophysics Data System (ADS)

    Rutten, R. J.

    2002-12-01

    This contribution honoring Kees de Jager's 80th birthday is a review of "one-dimensional" solar atmosphere modeling that followed on the initial "Utrecht Reference Photosphere" of Heintze, Hubenet & de Jager (1964). My starting point is the Bilderberg conference, convened by de Jager in 1967 at the time when NLTE radiative transfer theory became mature. The resulting Bilderberg model was quickly superseded by the HSRA and later by the VAL-FAL sequence of increasingly sophisticated NLTE continuum-fitting models from Harvard. They became the "standard models" of solar atmosphere physics, but Holweger's relatively simple LTE line-fitting model still persists as a favorite of solar abundance determiners. After a brief model inventory I discuss subsequent work on the major modeling issues (coherency, NLTE, dynamics) listed as to-do items by de Jager in 1968. The present conclusion is that one-dimensional modeling recovers Schwarzschild's (1906) finding that the lower solar atmosphere is grosso modo in radiative equilibrium. This is a boon for applications regarding the solar atmosphere as a one-dimensional stellar example - but the real sun, including all the intricate phenomena that now constitute the mainstay of solar physics, is vastly more interesting.

  20. Animal models for osteoporosis

    NASA Technical Reports Server (NTRS)

    Turner, R. T.; Maran, A.; Lotinun, S.; Hefferan, T.; Evans, G. L.; Zhang, M.; Sibonga, J. D.

    2001-01-01

    Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results, which ultimately can bring harm to patients who suffer from osteoporosis by delaying the advancement of knowledge.

  1. THE AQUATOX MODEL

    EPA Science Inventory

    This lecture will present AQUATOX, an aquatic ecosystem simulation model developed by Dr. Dick Park and supported by the U.S. EPA. The AQUATOX model predicts the fate of various pollutants, such as nutrients and organic chemicals, and their effects on the ecosystem, including fi...

  2. Modelling University Governance

    ERIC Educational Resources Information Center

    Trakman, Leon

    2008-01-01

    Twentieth century governance models used in public universities are subject to increasing doubt across the English-speaking world. Governments question if public universities are being efficiently governed; if their boards of trustees are adequately fulfilling their trust obligations towards multiple stakeholders; and if collegial models of…

  3. Model-Based Reasoning

    ERIC Educational Resources Information Center

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  4. Earth and ocean modeling

    NASA Technical Reports Server (NTRS)

    Knezovich, F. M.

    1976-01-01

    A modular structured system of computer programs is presented utilizing earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in observational data frequencies, accuracies, and scopes. The EOM in its present state is a first-order approach to a geophysical model of the earth's dynamics.

  5. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems that yield an identity output of a constant value, which can also be modelled in a linear or logistic regression with…

  6. On Some Electroconvection Models

    NASA Astrophysics Data System (ADS)

    Constantin, Peter; Elgindi, Tarek; Ignatova, Mihaela; Vicol, Vlad

    2017-02-01

    We consider a model of electroconvection motivated by studies of the motion of a two-dimensional annular suspended smectic film under the influence of an electric potential maintained at the boundary by two electrodes. We prove that this electroconvection model has global in time unique smooth solutions.

  7. Tacit Models and Infinity.

    ERIC Educational Resources Information Center

    Fischbein, Efraim

    2001-01-01

    Analyses several examples of tacit influences exerted by mental models on the interpretation of various mathematical concepts in the domain of actual infinity. Specifically addresses the unconscious effect of the figural-pictorial models of statements related to the infinite sets of geometrical points related to the concepts of function and…

  8. Dual-Schemata Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tadahiro; Sawaragi, Tetsuo

    In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizing machine-learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's schema model, a classical psychological model that explains memory and cognitive development in human beings. Our Dual-Schemata model is developed as a computational model of Piaget's schema model, focusing especially on the sensori-motor developmental period. This developmental process is characterized by a pair of mutually interacting dynamics: one formed by assimilation and accommodation, the other by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema's differentiation process corresponds to a symbol formation process occurring within an autonomous agent when it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target's movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.

  9. Using Models Effectively

    ERIC Educational Resources Information Center

    Eichinger, John

    2005-01-01

    Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…

  10. Australia's Next Top Fraction Model

    ERIC Educational Resources Information Center

    Gould, Peter

    2013-01-01

    Peter Gould suggests Australia's next top fraction model should be a linear model rather than an area model. He provides a convincing argument and gives examples of ways to introduce a linear model in primary classrooms.

  11. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  12. Animal models of tinnitus.

    PubMed

    Brozoski, Thomas J; Bauer, Carol A

    2016-08-01

    Presented is a thematic review of animal tinnitus models from a functional perspective. Chronic tinnitus is a persistent subjective sound sensation, typically emergent after hearing loss. Although the sensation is experientially simple, it appears to have a central nervous system substrate of unexpected complexity that includes areas outside of those classically defined as auditory. Over the past 27 years, animal models have significantly contributed to understanding tinnitus' complex neurophysiology. In that time, a diversity of models has been developed, each with its own strengths and limitations. None has clearly become a standard. Animal models trace their origin to the 1988 experiments of Jastreboff and colleagues. All subsequent models derive some of their features from those experiments. Common features include behavior-dependent psychophysical determination, acoustic conditions that contrast objective sound and silence, and inclusion of at least one normal-hearing control group. In the present review, animal models have been categorized as either interrogative or reflexive. Interrogative models use emitted behavior under voluntary control to indicate hearing. An example would be pressing a lever to obtain food in the presence of a particular sound. In this type of model animals are interrogated about their auditory sensations, analogous to asking a patient, "What do you hear?" These models require at least some training and motivation management, and reflect the perception of tinnitus. Reflexive models, in contrast, employ acoustic modulation of an auditory reflex, such as the acoustic startle response. An unexpected loud sound will elicit a reflexive motor response from many species, including humans. Although involuntary, acoustic startle can be modified by a lower-level preceding event, including a silent sound gap. Sound-gap modulation of acoustic startle appears to discriminate tinnitus in animals as well as humans, and requires no training or

  13. Kalman filter modeling

    NASA Technical Reports Server (NTRS)

    Brown, R. G.

    1984-01-01

    The formulation of appropriate state-space models for Kalman filtering applications is studied. The so-called model is completely specified by four matrix parameters and the initial conditions of the recursive equations. Once these are determined, the die is cast, and the way in which the measurements are weighted is determined forever after. Thus, finding a model that fits the physical situation at hand is all-important. It is also often the most difficult aspect of designing a Kalman filter. Formulation of discrete state models from the spectral density and ARMA random process descriptions is discussed. Finally, it is pointed out that many common processes encountered in applied work (such as band-limited white noise) simply do not lend themselves very well to Kalman filter modeling.
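
    The point that four matrix parameters plus initial conditions completely specify the filter can be made concrete with a scalar example; the parameter names and values below are illustrative, not from the report.

```python
def kalman_1d(zs, phi=1.0, q=0.01, h=1.0, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter: once (phi, q, h, r) and the initial
    conditions (x0, p0) are fixed, the measurement weighting is
    determined for every subsequent step."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # time update (prediction)
        x = phi * x
        p = phi * p * phi + q
        # measurement update
        k = p * h / (h * p * h + r)   # Kalman gain
        x = x + k * (z - h * x)
        p = (1.0 - k * h) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.0] * 50)  # constant, noise-free measurements
print(est[0], est[-1])       # the estimate converges toward 1.0
```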

  14. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to the effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
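
    A minimal permutation test for a two-group mean difference (the simplest linear model contrast) might look like the sketch below; the data values are invented.

```python
import random

def perm_test_mean_diff(a, b, n_perm=5000, seed=1):
    """Two-sample permutation test: shuffle the pooled observations,
    re-split into groups of the original sizes, and count permuted
    mean differences at least as extreme as the observed one."""
    random.seed(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# clearly separated groups -> small p-value
print(perm_test_mean_diff([5.1, 5.3, 5.0, 5.2], [4.1, 4.0, 4.2, 4.3]))
```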

  15. VENTILATION MODEL REPORT

    SciTech Connect

    V. Chipman

    2002-10-31

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their postclosure analyses.
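
    The wall heat fraction defined above reduces to simple arithmetic; a sketch (function and variable names are ours, values invented):

```python
def wall_heat_fraction(heat_removed_by_ventilation, heat_from_decay):
    """One minus the fraction of radionuclide decay heat carried away
    by the ventilation air; the remainder is conducted into the
    surrounding rock mass."""
    return 1.0 - heat_removed_by_ventilation / heat_from_decay

# e.g. ventilation removing 3 kW of a 10 kW decay heat load
print(wall_heat_fraction(3.0, 10.0))  # ~ 0.7
```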

  16. Open ocean tide modelling

    NASA Technical Reports Server (NTRS)

    Parke, M. E.

    1978-01-01

    Two trends evident in global tidal modelling since the first GEOP conference in 1972 are described. The first centers on the incorporation of terms for ocean loading and gravitational self attraction into Laplace's tidal equations. The second centers on a better understanding of the problem of near resonant modelling and the need for realistic maps of tidal elevation for use by geodesists and geophysicists. Although new models still show significant differences, especially in the South Atlantic, there are significant similarities in many of the world's oceans. This allows suggestions to be made for future locations for bottom pressure gauge measurements. Where available, estimates of M2 tidal dissipation from the new models are significantly lower than estimates from previous models.

  17. Stratiform chromite deposit model

    USGS Publications Warehouse

    Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R., II

    2010-01-01

    Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.

  18. Molecular modeling of peptides.

    PubMed

    Kuczera, Krzysztof

    2015-01-01

    This article presents a review of the field of molecular modeling of peptides. The main focus is on atomistic modeling with molecular mechanics potentials. The description of peptide conformations and solvation through potentials is discussed. Several important computer simulation methods are briefly introduced, including molecular dynamics, accelerated sampling approaches such as replica-exchange and metadynamics, free energy simulations and kinetic network models like Milestoning. Examples of recent applications for predictions of structure, kinetics, and interactions of peptides with complex environments are described. The reliability of current simulation methods is analyzed by comparison of computational predictions obtained using different models with each other and with experimental data. A brief discussion of coarse-grained modeling and future directions is also presented.

  19. Protein Model Database

    SciTech Connect

    Fidelis, K; Adzhubej, A; Kryshtafovych, A; Daniluk, P

    2005-02-23

The phenomenal success of the genome sequencing projects reveals the power of completeness in revolutionizing biological science. Currently it is possible to sequence entire organisms at a time, allowing for a systemic rather than fractional view of their organization and the various genome-encoded functions. There is an international plan to move towards a similar goal in the area of protein structure. This will not be achieved by experiment alone, but rather by a combination of efforts in crystallography, NMR spectroscopy, and computational modeling. Only a small fraction of structures are expected to be identified experimentally, with the remainder to be modeled. Presently there is no organized infrastructure to critically evaluate and present these data to the biological community. The goal of the Protein Model Database project is to create such infrastructure, including (1) a public database of theoretically derived protein structures; (2) reliable annotation of protein model quality; (3) novel structure analysis tools; and (4) access to the highest quality modeling techniques available.

  20. Strength Modeling Report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.; Lee, P.; Wong, S.

    1985-01-01

Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling an attempt was made to examine as many of these parameters as possible. The conclusions reached so far point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.

  1. Cardiovascular modeling and diagnostics

    SciTech Connect

    Kangas, L.J.; Keller, P.E.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper, a novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
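The diagnostic idea described, flagging deviations between the individual's modeled variables and their measured values, can be sketched generically. The threshold and heart-rate series below are illustrative assumptions, not values from the paper:

```python
def residual_alarm(predicted, observed, threshold):
    """Flag samples where |modeled - measured| exceeds a tolerance.

    In the scheme described above, a per-individual model supplies
    `predicted`; deviations from live measurements drive the diagnosis.
    The threshold here is an illustrative parameter.
    """
    return [abs(p - o) > threshold for p, o in zip(predicted, observed)]

hr_model = [72, 73, 71, 72]   # modeled heart rate (toy values)
hr_meas  = [72, 74, 80, 71]   # measured heart rate (toy values)
print(residual_alarm(hr_model, hr_meas, threshold=3))  # [False, False, True, False]
```

A real system would fuse residuals from several sensors rather than thresholding one signal, in line with the sensor-fusion point made in the abstract.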

  2. Integrated Workforce Modeling System

    NASA Technical Reports Server (NTRS)

    Moynihan, Gary P.

    2000-01-01

    There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.

  3. Criticality Model Report

    SciTech Connect

    J.M. Scaglione

    2003-03-12

The purpose of the ''Criticality Model Report'' is to validate the MCNP (CRWMS M&O 1998h) code's ability to accurately predict the effective neutron multiplication factor (k_eff) for a range of conditions spanned by various critical configurations representative of the potential configurations that commercial reactor assemblies stored in a waste package may take. Results of this work are an indication of the accuracy of MCNP for calculating eigenvalues, which will be used as input for criticality analyses for spent nuclear fuel (SNF) storage at the proposed Monitored Geologic Repository. The scope of this report is to document the development and validation of the criticality model. The criticality model is only applicable to commercial pressurized water reactor fuel. Valid ranges are established as part of the validation of the criticality model. This model activity follows the description in BSC (2002a).

  4. A PRELIMINARY JUPITER MODEL

    SciTech Connect

    Hubbard, W. B.; Militzer, B.

    2016-03-20

    In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.

  5. Modeling of Plasmasphere

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeto; Kumamoto, Atsushi; Kakinami, Yoshihiro

    2016-07-01

Electron density at altitudes below 10,000 km is estimated from upper-hybrid resonance (UHR) emissions observed by the plasma wave and sounder experiments (PWS) on the Akebono satellite from February 22, 1989 to April 23, 2015. The electron density of the plasmasphere is investigated statistically and compared with the International Reference Ionosphere model. We have made an empirical model of the electron density of the plasmasphere at altitudes between 1000 km and 10,000 km. The electron density distribution is also compared with a physical model (Plasmasphere Thermosphere Model: PTM) developed in Japan. The electron densities from the Akebono satellite and the PTM clearly show a density-gradient change at an altitude of 1500 km and at the plasmapause. The density-gradient change at 1500 km altitude corresponds to the transition height from O+ to H+. The electron density distribution of the plasmasphere clearly shows local-time, latitude, season, solar-activity and magnetic-activity dependences.

  6. Varicella infection modeling.

    SciTech Connect

    Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.; Nozick, Linda Karen; Martin, Nathaniel; Bandlow, Alisa; Detry, Richard Joseph; Evans, Leland B.; Berger, Taylor Eugen

    2013-09-01

    Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veteran Affairs (VA) motivated by a Varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of comparative effectiveness of interventions.

  7. Modeling glacial climates

    NASA Technical Reports Server (NTRS)

    North, G. R.; Crowley, T. J.

    1984-01-01

    Mathematical climate modelling has matured as a discipline to the point that it is useful in paleoclimatology. As an example a new two dimensional energy balance model is described and applied to several problems of current interest. The model includes the seasonal cycle and the detailed land-sea geographical distribution. By examining the changes in the seasonal cycle when external perturbations are forced upon the climate system it is possible to construct hypotheses about the origin of midlatitude ice sheets and polar ice caps. In particular the model predicts a rather sudden potential for glaciation over large areas when the Earth's orbital elements are only slightly altered. Similarly, the drift of continents or the change of atmospheric carbon dioxide over geological time induces radical changes in continental ice cover. With the advance of computer technology and improved understanding of the individual components of the climate system, these ideas will be tested in far more realistic models in the near future.

  8. Conditional statistical model building

    NASA Astrophysics Data System (ADS)

    Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus

    2008-03-01

We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly, and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of the corpus callosum. One fifth of the data set was used as a training set, the images of which were non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters on the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The Dice measures for the registration without a prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
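The Dice overlap quoted for the registrations is a standard measure; a minimal sketch in plain Python, using toy masks rather than the paper's data:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    given as flat 0/1 sequences of equal length."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0  # define empty vs. empty as 1

m1 = [1, 1, 0, 0, 1, 0]   # toy segmentation mask
m2 = [1, 0, 0, 0, 1, 1]   # toy reference mask
print(round(dice(m1, m2), 3))  # 2*2/(3+3) = 0.667
```

Dice ranges from 0 (no overlap) to 1 (identical masks), which is why values near 0.87 indicate good registration.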

  9. XAFS Model Compound Library

    DOE Data Explorer

    Newville, Matthew

The XAFS Model Compound Library contains XAFS data on model compounds. The term "model compounds" refers to compounds of homogeneous and well-known crystallographic or molecular structure. Each data file in this library has an associated atoms.inp file that can be converted to a feff.inp file using the program ATOMS. (See the related Searchable Atoms.inp Archive at http://cars9.uchicago.edu/~newville/adb/) This Library exists because XAFS data on model compounds is useful for several reasons, including comparing to unknown data for "fingerprinting" and testing calculations and analysis methods. The collection here is currently limited, but is growing. The focus to date has been on inorganic compounds and minerals of interest to the geochemical community. [Copied, with editing, from http://cars9.uchicago.edu/~newville/ModelLib/]

  10. Global ice sheet modeling

    SciTech Connect

    Hughes, T.J.; Fastook, J.L.

    1994-05-01

The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and results are discussed.

  11. Beyond the Standard Model

    SciTech Connect

    Peskin, M.E.

    1997-05-01

These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e+e- colliders.

  12. Proton channel models

    PubMed Central

    Pupo, Amaury; Baez-Nieto, David; Martínez, Agustín; Latorre, Ramón; González, Carlos

    2014-01-01

    Voltage-gated proton channels are integral membrane proteins with the capacity to permeate elementary particles in a voltage and pH dependent manner. These proteins have been found in several species and are involved in various physiological processes. Although their primary topology is known, lack of details regarding their structures in the open conformation has limited analyses toward a deeper understanding of the molecular determinants of their function and regulation. Consequently, the function-structure relationships have been inferred based on homology models. In the present work, we review the existing proton channel models, their assumptions, predictions and the experimental facts that support them. Modeling proton channels is not a trivial task due to the lack of a close homolog template. Hence, there are important differences between published models. This work attempts to critically review existing proton channel models toward the aim of contributing to a better understanding of the structural features of these proteins. PMID:24755912

  13. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  14. Slim Battery Modelling Features

    NASA Astrophysics Data System (ADS)

    Borthomieu, Y.; Prevot, D.

    2011-10-01

Saft has developed a life prediction model for VES and MPS cells and batteries. The Saft Li-ion Model (SLIM) is a macroscopic electrochemical model based on energy (global at cell level). Its main purpose is to predict battery performance over life for GEO, MEO and LEO missions. The model is based on electrochemical characteristics such as energy, capacity, EMF, internal resistance and end-of-charge voltage. It applies fading and calendar-law effects to energy and internal impedance as functions of time, temperature and end-of-charge voltage. Based on the mission profile and satellite power system characteristics, the model proposes various battery configurations. For each configuration, the model gives the battery performance using mission figures and profiles: power, duration, DOD, end-of-charge voltages, temperatures during eclipses and solstices, thermal dissipation and cell failures. For GEO/MEO missions, eclipse and solstice periods can include specific profiles such as plasma-propulsion firings and specific balancing operations. For LEO missions, the model is able to simulate high power peaks to predict radar pulses. Saft's main customers have been using the SLIM model, available in house, for two years. The purpose is to enable the satellite builders' power engineers to perform their own battery simulations during battery pre-dimensioning activities. The simulations can be shared with Saft engineers to refine the power system designs. The model has been correlated with existing life and calendar tests performed on all VES and MPS cells. Compared with life tests lasting more than 10 years, the accuracy of the model from a voltage point of view is within 10 mV at end of life. In addition, a comparison with in-orbit data has also been done. This paper will present the main features of the SLIM software and a comparison of its outputs with real-life test results.

  15. Saturn Radiation (SATRAD) Model

    NASA Technical Reports Server (NTRS)

    Garrett, H. B.; Ratliff, J. M.; Evans, R. W.

    2005-01-01

The Saturnian radiation belts have not received as much attention as the Jovian radiation belts because they are not nearly as intense: the famous Saturnian particle rings tend to deplete the belts near where their peak would occur. As a result, there has not been a systematic development of engineering models of the Saturnian radiation environment for mission design. A primary exception is that of Divine (1990). That study used published data from several charged particle experiments aboard the Pioneer 11, Voyager 1, and Voyager 2 spacecraft during their flybys at Saturn to generate numerical models for the electron and proton radiation belts between 2.3 and 13 Saturn radii. The Divine Saturn radiation model described the electron distributions at energies between 0.04 and 10 MeV and the proton distributions at energies between 0.14 and 80 MeV. The model was intended to predict particle intensity, flux, and fluence for the Cassini orbiter. Divine carried out hand calculations using the model but never formally developed a computer program that could be used for general mission analyses. This report seeks to fill that void by formally developing a FORTRAN version of the model that can be used as a computer design tool for missions to Saturn that require estimates of the radiation environment around the planet. The results of that effort and the program listings are presented here along with comparisons with the original estimates carried out by Divine. In addition, Pioneer and Voyager data were scanned in from the original references and compared with the FORTRAN model's predictions. The results were statistically analyzed in a manner consistent with Divine's approach to provide estimates of the ability of the model to reproduce the original data. Results of a formal review of the model by a panel of experts are also presented. Their recommendations for further tests, analyses, and extensions to the model are discussed.

  16. Symbolic modeling of epistasis.

    PubMed

    Moore, Jason H; Barney, Nate; Tsai, Chia-Ti; Chiang, Fu-Tien; Gui, Jiang; White, Bill C

    2007-01-01

The workhorse of modern genetic analysis is the parametric linear model. The advantages of the linear modeling framework are many and include a mathematical understanding of the model fitting process and ease of interpretation. However, an important limitation is that linear models make assumptions about the nature of the data being modeled. These assumptions may not be realistic for complex biological systems such as disease susceptibility, where nonlinearities in the genotype-to-phenotype mapping relationship that result from epistasis, plastic reaction norms, locus heterogeneity, and phenocopy, for example, are the norm rather than the exception. We have previously developed a flexible modeling approach called symbolic discriminant analysis (SDA) that makes no assumptions about the patterns in the data. Rather, SDA lets the data dictate the size, shape, and complexity of a symbolic discriminant function that could include any set of mathematical functions from a list of candidates supplied by the user. Here, we outline a new five step process for symbolic model discovery that uses genetic programming (GP) for coarse-grained stochastic searching, experimental design for parameter optimization, graphical modeling for generating expert knowledge, and estimation of distribution algorithms for fine-grained stochastic searching. Finally, we introduce function mapping as a new method for interpreting symbolic discriminant functions. We show that function mapping when combined with measures of interaction information facilitates statistical interpretation by providing a graphical approach to decomposing complex models to highlight synergistic, redundant, and independent effects of polymorphisms and their composite functions. We illustrate this five step SDA modeling process with a real case-control dataset.

  17. Selected Logistics Models and Techniques.

    DTIC Science & Technology

    1984-09-01

Scanned table-of-contents fragment; the legible entries list, among the logistics analysis models/techniques surveyed, the TI-59 Programmable Calculator LCC Model (type: cost estimating) and the Unmanned Spacecraft Cost Model.

  18. Modeling birds on wires.

    PubMed

    Aydoğdu, A; Frasca, P; D'Apice, C; Manzo, R; Thornton, J M; Gachomo, B; Wilson, T; Cheung, B; Tariq, U; Saidel, W; Piccoli, B

    2017-02-21

    In this paper we introduce a mathematical model to study the group dynamics of birds resting on wires. The model is agent-based and postulates attraction-repulsion forces between the interacting birds: the interactions are "topological", in the sense that they involve a given number of neighbors irrespective of their distance. The model is first mathematically analyzed and then simulated to study its main properties: we observe that the model predicts birds to be more widely spaced near the borders of each group. We compare the results from the model with experimental data, derived from the analysis of pictures of pigeons and starlings taken in New Jersey: two different image elaboration protocols allow us to establish a good agreement with the model and to quantify its main parameters. We also discuss the potential handedness of the birds, by analyzing the group organization features and the group dynamics at the arrival of new birds. Finally, we propose a more refined mathematical model that describes landing and departing birds by suitable stochastic processes.
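The attraction-repulsion dynamics with topological neighbours can be sketched in one dimension. The force law and constants below are illustrative stand-ins, not the paper's model:

```python
def step(positions, k_attract=0.05, k_repel=0.2, d0=1.0):
    """One Euler step of a 1-D attraction-repulsion model on a wire.

    Each bird interacts only with its nearest neighbour on either side
    (a "topological" interaction, independent of metric distance); the
    force attracts beyond the preferred spacing d0 and repels inside it.
    The force constants are illustrative, not the paper's values.
    """
    pos = sorted(positions)
    new = list(pos)
    for i, x in enumerate(pos):
        force = 0.0
        for j in (i - 1, i + 1):              # topological neighbours only
            if 0 <= j < len(pos):
                d = pos[j] - x
                sign = 1.0 if d > 0 else -1.0
                gap = abs(d) - d0             # signed deviation from d0
                force += (k_attract if gap > 0 else k_repel) * gap * sign
        new[i] = x + force
    return new

# three birds: the left pair is too close, the right bird too far away
print(step([0.0, 0.3, 2.5]))
```

Iterating `step` relaxes the group toward uniform spacing; birds at the ends, with only one neighbour, equilibrate at wider gaps, consistent with the border effect noted in the abstract.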

  19. The timbre model

    NASA Astrophysics Data System (ADS)

    Jensen, Kristoffer

    2002-11-01

A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set, fixed size, and separates the sounds in dimensions akin to the timbre dimensions as proposed in timbre research. The analysis of the model parameters is fully documented, and it proposes, in particular, a method for the estimation of the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, and the inharmonicity and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes, and also for additive noises). Some of the applications include synthesis, where a real-time application is being developed with an intuitive GUI; classification and search of sounds based on their content; and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal analysis/synthesis and some timbre perception research, then present the timbre model, show its validity for individual musical instrument sounds, and finally introduce some expression additions to the model.

  20. Modelling urban growth patterns

    NASA Astrophysics Data System (ADS)

    Makse, Hernán A.; Havlin, Shlomo; Stanley, H. Eugene

    1995-10-01

CITIES grow in a way that might be expected to resemble the growth of two-dimensional aggregates of particles, and this has led to recent attempts1-3 to model urban growth using ideas from the statistical physics of clusters. In particular, the model of diffusion-limited aggregation4,5 (DLA) has been invoked to rationalize the apparently fractal nature of urban morphologies1. The DLA model predicts that there should exist only one large fractal cluster, which is almost perfectly screened from incoming 'development units' (representing, for example, people, capital or resources), so that almost all of the cluster growth takes place at the tips of the cluster's branches. Here we show that an alternative model, in which development units are correlated rather than being added to the cluster at random, is better able to reproduce the observed morphology of cities and the area distribution of sub-clusters ('towns') in an urban system, and can also describe urban growth dynamics. Our physical model, which corresponds to the correlated percolation model6-8 in the presence of a density gradient9, is motivated by the fact that in urban areas development attracts further development. The model offers the possibility of predicting the global properties (such as scaling behaviour) of urban morphologies.
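For contrast with the correlated model, the classical DLA picture discussed in the abstract can be sketched in a few lines. Grid size, particle count and the launch scheme below are illustrative choices:

```python
import random

def dla(n_particles=60, size=31, seed=1):
    """Minimal on-lattice DLA sketch (illustrating the screening effect
    described in the abstract, not the paper's correlated-percolation
    model). Random walkers stick on first contact with the cluster, so
    nearly all growth happens at branch tips. Parameters are illustrative.
    """
    random.seed(seed)
    c = size // 2
    cluster = {(c, c)}                         # single seed at the centre
    launch = [(1, c), (size - 2, c), (c, 1), (c, size - 2)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        x, y = random.choice(launch)           # new walker enters at an edge
        while True:
            dx, dy = random.choice(moves)
            x, y = x + dx, y + dy
            if not (0 < x < size - 1 and 0 < y < size - 1):
                x, y = random.choice(launch)   # walker escaped: relaunch
                continue
            if any((x + mx, y + my) in cluster for mx, my in moves):
                cluster.add((x, y))            # stick next to the cluster
                break
    return cluster

grown = dla()
print(len(grown))  # seed site plus one stuck site per walker
```

In the correlated-percolation alternative, occupation probabilities would instead be spatially correlated and decay with distance from the centre, rather than being driven by wandering particles.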

  1. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  2. Invertebrate models of alcoholism.

    PubMed

    Scholz, Henrike; Mustard, Julie A

    2013-01-01

    For invertebrates to become useful models for understanding the genetic and physiological mechanisms of alcoholism related behaviors and the predisposition towards alcoholism, several general requirements must be fulfilled. The animal should encounter ethanol in its natural habitat, so that the central nervous system of the organism will have evolved mechanisms for responding to ethanol exposure. How the brain adapts to ethanol exposure depends on its access to ethanol, which can be regulated metabolically and/or by physical barriers. Therefore, a model organism should have metabolic enzymes for ethanol degradation similar to those found in humans. The neurons and supporting glial cells of the model organism that regulate behaviors affected by ethanol should share the molecular and physiological pathways found in humans, so that results can be compared. Finally, the use of invertebrate models should offer advantages over traditional model systems and should offer new insights into alcoholism-related behaviors. In this review we will summarize behavioral similarities and identified genes and mechanisms underlying ethanol-induced behaviors in invertebrates. This review mainly focuses on the use of the nematode Caenorhabditis elegans, the honey bee Apis mellifera and the fruit fly Drosophila melanogaster as model systems. We will discuss insights gained from those studies in conjunction with their vertebrate model counterparts and the implications for future research into alcoholism and alcohol-induced behaviors.

  3. Learning planar Ising models

    SciTech Connect

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; Netrapalli, Praneeth

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among the variables, we select a planar graph and an optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
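
    The greedy, planarity-constrained graph selection described above can be sketched as a toy edge search using networkx's planarity test. This is an illustrative sketch only, not the authors' implementation; ranking candidate edges by absolute pairwise correlation is an assumption about the selection criterion:

```python
import itertools
import networkx as nx
import numpy as np

def greedy_planar_graph(corr):
    """Greedily add edges in order of |correlation| strength,
    rejecting any edge that would break planarity (toy sketch)."""
    n = corr.shape[0]
    G = nx.empty_graph(n)
    # Rank candidate edges by absolute pairwise correlation.
    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: -abs(corr[e[0], e[1]]))
    for u, v in edges:
        G.add_edge(u, v)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(u, v)  # this edge would make the graph non-planar
    return G

rng = np.random.default_rng(0)
samples = rng.standard_normal((6, 500))      # 6 toy binary-ish variables
corr = np.corrcoef(samples)
G = greedy_planar_graph(corr)
print(G.number_of_edges(), nx.check_planarity(G)[0])
```

    Fitting the optimal Ising couplings on the selected graph (the second step in the abstract) is omitted here; the sketch covers only the structure-selection loop.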

  4. Learning planar Ising models

    DOE PAGES

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.

  5. Model molecules mimicking asphaltenes.

    PubMed

    Sjöblom, Johan; Simon, Sébastien; Xu, Zhenghe

    2015-04-01

    Asphaltenes are typically defined as the fraction of petroleum insoluble in n-alkanes (typically heptane, but also hexane or pentane) but soluble in toluene. This fraction causes problems of emulsion formation and deposition/precipitation during crude oil production, processing and transport. From the definition it follows that asphaltenes are not a homogeneous fraction but are composed of molecules polydisperse in molecular weight, structure and functionality. Their complexity makes understanding their properties difficult. Proper model molecules with well-defined structures that resemble the properties of real asphaltenes can help improve this understanding. Over the last ten years different research groups have proposed different asphaltene model molecules and studied how well they can mimic the properties of asphaltenes and the mechanisms behind those properties. This article reviews the different classes of model compounds proposed and presents their properties in comparison with fractionated asphaltenes. After presenting the motivation for developing model asphaltenes, the composition and properties of asphaltenes are described, followed by the approaches and accomplishments of the different groups working on asphaltene model compounds. The bulk and interfacial properties of the perylene-based model asphaltene compounds developed by Sjöblom et al. are presented next. Finally, the emulsion-stabilization properties of fractionated asphaltenes and model asphaltene compounds are presented and discussed.

  6. Biophysical models in hadrontherapy

    NASA Astrophysics Data System (ADS)

    Scholz, M.; Elsaesser, T.

    One major rationale for the application of ion beams in tumor therapy is their increased relative biological effectiveness (RBE) in the Bragg peak region. For dose prescription, the increased effectiveness has to be taken into account in treatment planning. Hence, the complex dependencies of RBE on the dose level, biological endpoint, position in the field, etc. require biophysical models, which have to fulfill two important criteria: simplicity and quantitative precision. Simplicity means that the number of free parameters should be kept at a minimum. Due to the lack of precise quantitative data, at least at present, this requirement is incompatible with approaches aiming at the molecular modeling of the whole chain of production, processing and repair of biological damage. Quantitative precision is required since steep gradients in the dose-response curves are observed for most tumor and normal tissues; thus even small uncertainties in the estimation of the biologically effective dose can transform into large uncertainties in the clinical outcome. The paper will give a general introduction into the field, followed by a brief description of a specific model, the so-called Local Effect Model (LEM). This model has been successfully applied within treatment planning in the GSI pilot project for carbon ion tumor therapy for almost 10 years now. The model is based on the knowledge of charged particle track structure in combination with the response of the biological objects to conventional photon radiation. The model will be critically discussed with respect to other
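
    The photon dose response referred to above is conventionally written in linear-quadratic (LQ) form, S = exp(-(alpha*D + beta*D^2)). The sketch below inverts the LQ model to compare iso-effect doses and form an RBE ratio, illustrating why small dose errors matter on steep response curves. The alpha/beta values are hypothetical illustrations, not parameters from the paper or the LEM itself:

```python
import math

def lq_dose_for_effect(alpha, beta, surv):
    """Invert the linear-quadratic model S = exp(-(alpha*D + beta*D^2))
    for the dose D that produces survival level `surv`."""
    effect = -math.log(surv)
    # Positive root of beta*D^2 + alpha*D - effect = 0.
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * effect)) / (2 * beta)

# Hypothetical photon and ion LQ parameters (Gy^-1, Gy^-2); illustrative only.
d_photon = lq_dose_for_effect(alpha=0.2, beta=0.05, surv=0.1)
d_ion = lq_dose_for_effect(alpha=0.6, beta=0.05, surv=0.1)
rbe = d_photon / d_ion   # RBE at the 10% survival level
print(round(d_photon, 2), round(d_ion, 2), round(rbe, 2))
```

    With a larger alpha for the ion, the same survival level is reached at a lower dose, so the RBE ratio exceeds one, as expected in the Bragg peak region.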

  7. Modeling the transition region

    NASA Astrophysics Data System (ADS)

    Singer, Bart A.

    1994-04-01

    The calculation of engineering flows undergoing laminar-turbulent transition presents special problems. Mean-flow quantities obey neither the fully laminar nor the fully turbulent correlations. In addition, local maxima in skin friction, wall temperature, and heat transfer often occur near the end of the transition region. Traditionally, modeling this region has been important for the design of turbine blades, where the transition region is long in relation to the chord length of the blade. More recently, the need for better transition-region models has been recognized by designers of hypersonic vehicles where the high Mach number, the low Reynolds number, and the low-disturbance flight environment emphasize the importance of the transition region. Needless to say, a model that might work well for the transitional flows typically found in gas turbines will not necessarily work well for the external surface of a hypersonic vehicle. In Section 2 of this report, some of the important flow features that control the transition region will be discussed. In Section 3, different approaches to the modeling problem will be summarized and cataloged. Fully turbulent flow models will be discussed in detail in Section 4; models specifically designed for transitional flow, in Section 5; and the evaluation of models, in Section 6.

  8. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in predictive modeling reflects advances on several fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still too young to indicate the likely future direction of this so-called 'machine intelligence', and therefore how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms covering the main fields of onco-urology.

  9. Atmospheric Models for Aerocapture

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta L.; Keller, Vernon W.

    2004-01-01

    There are eight destinations in the Solar System with sufficient atmosphere for aerocapture to be a viable aeroassist option: Venus, Earth, Mars, Jupiter, Saturn and its moon Titan, Uranus, and Neptune. Engineering-level atmospheric models for four of these targets (Earth, Mars, Titan, and Neptune) have been developed for NASA to support systems analysis studies of potential future aerocapture missions. Development of a similar atmospheric model for Venus has recently commenced. An important capability of all of these models is their ability to simulate quasi-random density perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Similarities and differences among these atmospheric models are presented, with emphasis on the recently developed Neptune model and on planned characteristics of the Venus model. Example applications for aerocapture are also presented and illustrated. Recent updates to the Titan atmospheric model are discussed, in anticipation of applications for trajectory and atmospheric reconstruction of the Huygens Probe entry at Titan.
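
    The quasi-random density perturbations mentioned above can be illustrated with a minimal sketch: a generic exponential mean-density profile multiplied by a spatially correlated AR(1) perturbation along a descending trajectory. This is an illustrative stand-in, not the engineering models' actual perturbation scheme; the scale height, perturbation magnitude, and correlation parameter are all assumed values:

```python
import math
import random

def density_profile(h_km, rho0=1.225, scale_height=8.5):
    """Mean exponential density (kg/m^3); a generic stand-in profile,
    not any of the engineering-level model atmospheres."""
    return rho0 * math.exp(-h_km / scale_height)

def perturbed_densities(alts_km, sigma=0.05, corr=0.9, seed=1):
    """Multiplicative quasi-random perturbations via an AR(1) process,
    mimicking the correlated density dispersions used in Monte Carlo runs."""
    rng = random.Random(seed)
    eps, out = 0.0, []
    for h in alts_km:
        # AR(1) update keeps perturbations correlated between nearby altitudes.
        eps = corr * eps + math.sqrt(1 - corr**2) * rng.gauss(0.0, sigma)
        out.append(density_profile(h) * (1.0 + eps))
    return out

# Sample densities along a descending trajectory from 100 km to 22 km.
rhos = perturbed_densities([100 - 2 * i for i in range(40)])
print(len(rhos), all(r > 0 for r in rhos))
```

    In a Monte Carlo study, each run would redraw the perturbation seed, giving a dispersed family of density profiles for guidance and thermal analyses.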

  10. SPAR Model Structural Efficiencies

    SciTech Connect

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved, and this project continues the effort to resolve them. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support, so this project has instead addressed SPAR modeling issues: • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC power modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches

  11. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  12. Acute radiation risk models

    NASA Astrophysics Data System (ADS)

    Smirnova, Olga

    Biologically motivated mathematical models describing the dynamics of the major hematopoietic lineages (the thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems) in acutely/chronically irradiated humans are developed. These models are implemented as systems of nonlinear differential equations whose variables and constant parameters have clear biological meaning. It is shown that the developed models are capable of reproducing clinical data on the dynamics of these systems in humans exposed to acute radiation as a result of incidents and accidents, as well as in humans exposed to low-level chronic radiation. Moreover, the averaged value of the "lethal" dose rates of chronic irradiation evaluated within the models of these four major hematopoietic lineages coincides with the real minimal dose rate of lethal chronic irradiation. The demonstrated ability of the models of the human thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems to predict the dynamical response of these systems to acute/chronic irradiation over wide ranges of doses and dose rates implies that these mathematical models form a universal tool for investigating and predicting the dynamics of the major human hematopoietic lineages under a vast range of irradiation scenarios. In particular, these models could be applied to radiation risk assessment for the health of astronauts exposed to space radiation during long-term space missions, such as voyages to Mars or Lunar colonies, as well as for the health of people exposed to acute/chronic irradiation due to environmental radiological events.

  13. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
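
    The fitting idea behind the FGAM — expanding the unknown surface F(x, t) in a tensor-product basis and reducing the integral over t to a linear least-squares problem — can be sketched on synthetic data. For brevity this sketch uses a plain polynomial tensor basis and a simple quadrature mean in place of the paper's penalized tensor-product B-splines; it illustrates the structure of the design matrix, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, deg = 200, 50, 3
t = np.linspace(0, 1, m)
# Synthetic functional covariates X_i(t): random low-order polynomial curves.
X = rng.standard_normal((n, 4)) @ np.vstack([t**k for k in range(4)])
# True surface F(x, t) = x * t; the response integrates F over t, plus noise.
y = (X * t).mean(axis=1) + 0.01 * rng.standard_normal(n)

# Tensor-product design: column (a, b) holds mean_j X_i(t_j)^a * t_j^b,
# i.e. the quadrature of the basis function x^a * t^b along each curve.
cols = [(X**a * t**b).mean(axis=1)
        for a in range(deg + 1) for b in range(deg + 1)]
Z = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ coef
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))
```

    Replacing the polynomial basis with B-splines and adding roughness penalties, as the paper does, changes the basis columns and the solver but not this overall "basis expansion plus quadrature plus linear fit" structure.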

  14. Spatiochromatic model of vision

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.

    1996-04-01

    A computer model of human spatiochromatic vision, based on the scheme proposed by De Valois and De Valois has been developed. The implementation of the model enables true color 2-D images to be processed. The input consists of cone signals at each pixel. Subsequent levels of the model are represented by arrays of activity corresponding to the equivalent neural activity. The implementation allows the behavior of different stages of the model -- retinal and cortical -- to be studied with different varieties of spatial and chromatic stimuli of any complexity. In addition the model is extensible to allow different types of neural mechanisms and cortical demultiplexing processes to be incorporated. As well as providing qualitative insight into the operation of the different stages of the model the implementation also permits quantitative predictions to be made. Both increment threshold and hue naming results are predicted by the model, but the accuracy of these predictions is contingent upon an appropriate choice of adaptation state at the retinal cone and ganglion cell level.
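
    The first, linear stage of such a model maps per-pixel cone signals into opponent channels. The sketch below uses a common textbook opponent transform; the specific weights are assumptions for illustration and do not reproduce the De Valois and De Valois multiplexing stages or the model's later cortical processing:

```python
import numpy as np

def cone_to_opponent(lms):
    """Map per-pixel cone signals (L, M, S) to opponent channels:
    achromatic (L+M), red-green (L-M), blue-yellow (S - (L+M)/2).
    A textbook linear opponent stage; weights are illustrative."""
    T = np.array([[1.0, 1.0, 0.0],     # achromatic
                  [1.0, -1.0, 0.0],    # red-green
                  [-0.5, -0.5, 1.0]])  # blue-yellow
    return lms @ T.T

img = np.random.default_rng(2).random((4, 4, 3))   # toy LMS "image"
opp = cone_to_opponent(img.reshape(-1, 3)).reshape(4, 4, 3)
print(opp.shape)
```

    Each output plane is then available for the model's subsequent spatial filtering and demultiplexing stages.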

  15. SMC: SCENIC Model Control

    NASA Technical Reports Server (NTRS)

    Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.

    2015-01-01

    NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions, with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three third-party software tools offer their unique abilities at different stages of the simulation process: MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components into an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015.
The intern team is composed of four students in

  16. Turbulence Modeling Workshop

    NASA Technical Reports Server (NTRS)

    Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already under-way, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing datasets. A high priority should be given to document existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.

  17. Xenotransplantation Model of Psoriasis.

    PubMed

    Di Domizio, Jeremy; Conrad, Curdin; Gilliet, Michel

    2017-01-01

    Psoriasis is a chronic autoimmune skin disease affecting approximately 2 % of the population with a major psychosocial and socioeconomic impact. A causal therapy leading to permanent cure is not available, and current treatments only lead to limited amelioration, and therefore new therapeutic targets need to be identified. Recent works demonstrated a predominant role of TH17 cells in the pathogenesis of psoriasis; yet the underlying molecular mechanisms driving the development of the disease are still largely elusive. Several mouse models of psoriasis including drug-induced models (topical application of imiquimod to the skin) and genetically engineered mice (constitutive activation of epidermal STAT3, epidermal deletion of JunB/c-Jun, and epidermal overexpression of Tie2) have been used to study the pathophysiology of the disease; however such models cannot fully recapitulate all molecular and cellular pathways occurring in human psoriasis. Xenotransplantation of human pre-psoriatic skin onto immunodeficient mice and triggering its conversion into a psoriatic plaque is the best model to dissect the mechanisms occurring during the development of human psoriasis. One model is based on the transplantation of human pre-psoriatic skin onto SCID mice followed by the transfer of activated autologous T cells. The ex vivo activation of T cells required to induce the psoriatic conversion of the graft limits the study of early events in the pathogenesis of psoriasis. Another model is based on transplantation of human pre-psoriatic skin onto AGR129 mice. In this model, the skin grafting is sufficient to activate human cells contained in the graft and trigger the conversion of the graft into a psoriatic skin, without the need of transferring activated T cells. Here we review the methodological aspects of this model and illustrate how this model can be used to dissect early events of psoriasis pathogenesis.

  18. Global Core Plasma Model

    NASA Technical Reports Server (NTRS)

    Gallagher, Dennis L.; Craven, P. D.; Comfort, R. H.

    1999-01-01

    The Global Core Plasma Model (GCPM) provides empirically derived core plasma density as a function of geomagnetic and solar conditions throughout the inner magnetosphere. It is continuous in value and gradient, and is composed of separate models for the ionosphere, the plasmasphere, the plasmapause, the trough, and the polar cap. The relative composition of plasmaspheric H+, He+, and O+ is included in the GCPM. A blunt plasmaspheric bulge and rotation of the bulge with changing geomagnetic conditions are included. The GCPM is an amalgam of density models, intended to serve as a framework for continued improvement as new measurements become available and are used to characterize core plasma density, composition, and temperature.

  19. Modeling EERE Deployment Programs

    SciTech Connect

    Cort, Katherine A.; Hostick, Donna J.; Belzer, David B.; Livingston, Olga V.

    2007-11-08

    The purpose of this report is to compile information and conclusions gathered as part of three separate tasks undertaken as part of the overall project, “Modeling EERE Deployment Programs,” sponsored by the Planning, Analysis, and Evaluation office within the Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE). The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address improvements to modeling in the near term, and note gaps in knowledge where future research is needed.

  20. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  1. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

    Accomplishments are described for the second-year effort of a 3-year program to develop methodology for component-specific modeling of aircraft engine hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models; (2) geometry model generators; (3) remeshing; (4) specialty 3-D inelastic structural analysis; (5) computationally efficient solvers; (6) adaptive solution strategies; (7) engine performance parameters/component response variables decomposition and synthesis; (8) integrated software architecture and development; and (9) validation cases for the software developed.

  2. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III - ...

  3. Residential mobility microsimulation models

    NASA Astrophysics Data System (ADS)

    Wang, Yifei; Wu, Lun

    2010-09-01

    Residential mobility refers to the spatial movement of individuals and households between dwellings within an urban area. This considerable amount of intra-urban movement affects the urban structure and has significant repercussions for urban transportation. In order to understand and project the related impacts, a considerable number of residential mobility models have been developed and used in the regional planning process. Within this context, the history and state of the art of residential mobility models are reviewed. A residential mobility microsimulation model, called URM-Microsim (Urban Residential Mobility Microsimulation), is then introduced and discussed.
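
    The core loop of a residential mobility microsimulation can be sketched in a few lines: in each simulated year, every household decides whether to move and, if so, draws a destination zone. This toy version uses a fixed move probability and uniform destination choice; a model such as URM-Microsim would condition these choices on household, dwelling, and zone attributes, so all parameters here are illustrative assumptions:

```python
import random

def simulate_moves(n_households=1000, n_zones=5, move_prob=0.12,
                   years=10, seed=3):
    """Toy annual microsimulation: each household moves with a fixed
    probability and picks a new zone uniformly at random."""
    rng = random.Random(seed)
    zone = [rng.randrange(n_zones) for _ in range(n_households)]
    for _ in range(years):
        for i in range(n_households):
            if rng.random() < move_prob:      # household decides to move
                zone[i] = rng.randrange(n_zones)  # uniform destination choice
    # Final distribution of households across zones.
    return [zone.count(z) for z in range(n_zones)]

counts = simulate_moves()
print(counts)
```

    Replacing the fixed `move_prob` with a logit model over household attributes is the usual next step toward a realistic microsimulation.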

  4. Model of detached plasmas

    SciTech Connect

    Yoshikawa, S.; Chance, M.

    1986-07-01

    Recently a tokamak plasma was observed in TFTR that was not limited by a limiter or a divertor. A model is proposed to explain this equilibrium, which is called a detached plasma. The model consists of (1) the core plasma where ohmic heating power is lost by anomalous heat conduction and (2) the shell plasma where the heat from the core plasma is radiated away by the atomic processes of impurity ions. A simple scaling law is proposed to test the validity of this model.

  5. Perspectives on multifield models

    SciTech Connect

    Banerjee, S.

    1997-07-01

    Multifield models for prediction of nuclear reactor thermalhydraulics are reviewed from the viewpoint of their structure and requirements for closure relationships. Their strengths and weaknesses are illustrated with examples, indicating that they are effective in predicting separated and distributed flow regimes, but have problems for flows with large oscillations. Needs for multifield models are also discussed in the context of reactor operations and accident simulations. The highest priorities for future developments appear to relate to closure relationships for three-dimensional multifield models with emphasis on those needed for calculations of phase separation and entrainment/de-entrainment in complex geometries.

  6. Semiempirical models of sunspots

    SciTech Connect

    Sobotka, M.

    1985-10-01

    On the basis of spectroscopic observations in the Mg I b1, Fe I 5434 A, and Na I D2 lines, 12 semiempirical models of sunspots of different sizes (umbral radius r = 2-8 arcsec) are constructed for several stages of their development. It is shown that the model of an umbra varies greatly with an increase in umbral radius up to a limiting value of 3.5-4 arcsec (Su = 7.5 MSH), after which the changes are small; for a fixed umbral radius there is no significant difference between the models of sunspots in different phases of their development.

  7. Chiral models: Geometrical aspects

    NASA Astrophysics Data System (ADS)

    Perelomov, A. M.

    1987-02-01

    Two-dimensional classical chiral models of field theory are considered, with the main attention paid to the geometrical aspects of such theories. A characteristic feature of these models is that the interaction is introduced not by adding an interaction Lagrangian to the free-field Lagrangian, but has a purely geometrical origin related to the intrinsic curvature of the manifold. These models are in many respects analogous to non-Abelian gauge theories and, as has recently become clear, they are also important for superstring theory, which is nowadays the most probable candidate for a truly unified theory of all interactions including gravitation.

  8. Dynamical model for thyroid

    NASA Astrophysics Data System (ADS)

    Rokni Lamooki, Gholam Reza; Shirazi, Amir H.; Mani, Ali R.

    2015-05-01

    The thyroid's main chemical reactions are employed to develop a mathematical model. The presented model is based on differential equations whose dynamics reflect many aspects of the thyroid's behavior. Our main focus here is the well-known, but not well-understood, phenomenon called the Wolff-Chaikoff effect. It is shown that the inhibitory effect of iodide intake on the rate of one single enzyme produces an effect similar to Wolff-Chaikoff. Beyond this, the presented model is capable of revealing other complex phenomena of thyroid hormone homeostasis.
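
    A minimal way to obtain a Wolff-Chaikoff-like biphasic response from a single enzyme is substrate-inhibition kinetics, where the synthesis rate first rises and then falls as iodide increases. This sketch is illustrative only and is not the paper's actual reaction scheme; the rate law and all parameter values are hypothetical:

```python
def synthesis_rate(iodide, vmax=1.0, km=0.5, ki=2.0):
    """Substrate-inhibition kinetics: the rate rises with iodide at low
    levels, then falls at high iodide -- a minimal stand-in for a
    Wolff-Chaikoff-type inhibition of a single enzyme."""
    return vmax * iodide / (km + iodide + iodide**2 / ki)

# Low, moderate, and high iodide: the rate peaks at intermediate levels.
low, mid, high = synthesis_rate(0.2), synthesis_rate(1.0), synthesis_rate(20.0)
print(round(low, 3), round(mid, 3), round(high, 3))
```

    Embedding such a rate law in the model's differential equations is what lets a single inhibited enzyme reproduce the suppression of hormone synthesis at high iodide intake.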

  9. Animal Model of Dermatophytosis

    PubMed Central

    Shimamura, Tsuyoshi; Kubota, Nobuo; Shibuya, Kazutoshi

    2012-01-01

    Dermatophytosis is a superficial fungal infection caused by dermatophytes that invade the keratinized tissue of humans and animals. Lesions from dermatophytosis exhibit an inflammatory reaction induced by the host's normal immune function to eliminate the invading fungi. Many scientists have attempted to establish an experimental animal model to elucidate the pathogenesis of human dermatophytosis and to evaluate drug efficacy. However, current animal models have several issues. In the present paper, we survey reports on the methodology of dermatophytosis animal models for tinea corporis, tinea pedis, and tinea unguium and discuss future prospects. PMID:22619489

  10. Deconstructed Higgsless Models

    SciTech Connect

    Casalbuoni, Roberto

    2006-01-12

    We consider the possibility of constructing realistic Higgsless models within the context of deconstructed or moose models. We show that the constraints coming from the electro-weak experimental data are very severe and that it is very difficult to reconcile them with the requirement of improving the unitarity bound of the Higgsless Standard Model. On the other hand, with some fine tuning, a solution is found by delocalizing the standard fermions along the lattice line, that is allowing the fermions to couple to the moose gauge fields.

  11. Stochastic ontogenetic growth model

    NASA Astrophysics Data System (ADS)

    West, B. J.; West, D.

    2012-02-01

    An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second laws of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model; the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
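The deterministic core of ontogenetic growth models of this type balances a 3/4-power metabolic supply term against a linear maintenance cost. A minimal sketch follows; the parameter values, initial mass, and forward-Euler scheme are illustrative assumptions, not taken from the paper (the SOGM itself adds a stochastic term not modeled here):

```python
# Minimal sketch of the deterministic ontogenetic growth model (OGM),
#   dm/dt = a * m**(3/4) - b * m,
# whose asymptotic (adult) mass is M = (a/b)**4.  The coefficients a, b,
# the initial mass m0, and the Euler step are illustrative assumptions.
def integrate_ogm(a=1.0, b=0.5, m0=1.0, dt=0.01, t_end=100.0):
    """Forward-Euler integration of the OGM; returns the final mass."""
    m = m0
    for _ in range(int(t_end / dt)):
        m += dt * (a * m ** 0.75 - b * m)
    return m

# For a=1.0, b=0.5 the asymptotic mass is (1.0/0.5)**4 = 16,
# which the trajectory approaches monotonically from m0 = 1.
print(integrate_ogm())
```

With these illustrative parameters the linearized relaxation time near the fixed point is about 8 time units, so by t = 100 the mass has effectively converged to the asymptotic value.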

  12. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  13. Faces of matrix models

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2012-08-01

    Partition functions of eigenvalue matrix models possess a number of very different descriptions: as matrix integrals, as solutions to linear and nonlinear equations, as τ-functions of integrable hierarchies and as special-geometry prepotentials, as result of the action of W-operators and of various recursions on elementary input data, as gluing of certain elementary building blocks. All this explains the central role of such matrix models in modern mathematical physics: they provide the basic "special functions" to express the answers and relations between them, and they serve as a dream model of what one should try to achieve in any other field.

  14. Aviation Safety Simulation Model

    NASA Technical Reports Server (NTRS)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.

  15. A hierarchy of models for multilane vehicular traffic. 1: Modelling

    SciTech Connect

    Klar, A.; Wegener, R.

    1999-03-01

    In the present paper multilane models for vehicular traffic are considered. A microscopic multilane model based on reaction thresholds is developed. From this model an Enskog-like kinetic model is derived; in particular, care is taken to incorporate the correlations between the vehicles. From the kinetic model a fluid dynamic model is derived, with the macroscopic coefficients deduced from the underlying kinetic model. Numerical simulations are presented for all three levels of description, together with a comparison of the results.

  16. The synergy model: the ultimate mentoring model.

    PubMed

    Kerfoot, Karlene M; Cox, Marilyn

    2005-06-01

    Clarian Health Partners is a system that includes Methodist Hospital of Indiana, Indiana University Hospital, and Riley Hospital for Children. The nurses of Clarian Health Partners are the recipients of many national awards for their leadership and innovations in critical care. Nurse leaders at Clarian have developed and implemented a unique framework for professional development based on the synergy model. In this article, the Chief Nurse Executive for the System, Dr. Karlene Kerfoot, and Marilyn Cox, the Senior Vice President for Nursing and Patient Care at Riley Hospital for Children, describe their vision of and strategies for a new approach to mentoring professional nursing staff.

  17. Expert Models and Modeling Processes Associated with a Computer-Modeling Tool

    ERIC Educational Resources Information Center

    Zhang, BaoHui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-01-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using "think aloud" technique…

  18. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  19. X-33 RCS model

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Part of the high pressure nitrogen system used for the 1% scale X-33 reaction control system model. Installed in the Unitary Plan Wind Tunnel for supersonic testing. In building 1251, test section #2.

  20. Green Infrastructure Modeling Tools

    EPA Pesticide Factsheets

    Modeling tools support planning and design decisions on a range of scales from setting a green infrastructure target for an entire watershed to designing a green infrastructure practice for a particular site.

  1. The Negotiation Training Model.

    ERIC Educational Resources Information Center

    Wilkenfeld, Jonathan; Kraus, Sarit; Holley, Kim M.

    1998-01-01

    Discusses decision making and suggests that using simulation techniques based on a sophisticated decision support system facilitates the identification of utility-maximizing strategies. The negotiation training model is described, and preliminary results based on simulation runs are reported. (LRW)

  2. GEOS-5 Modeled Clouds

    NASA Video Gallery

    This visualization shows clouds from a simulation using the Goddard Earth Observing System Model, Verison 5 (GEOS-5). The global atmospheric simulation covers a period from Feb 3, 2010 through Feb ...

  3. Using the Partnership Model

    ERIC Educational Resources Information Center

    Wilks, Bob

    1977-01-01

    Demonstrates how the Partnership Model can be utilized in the real world by showing how it served as a guide during the production of a film on female menopause for the College of Human Medicine. (MH)

  4. Colorado Model Rocketry Workshop.

    ERIC Educational Resources Information Center

    Galindez, Peter

    1978-01-01

    Describes a summer workshop course in rocketry offered to educators and sponsored by industry. The participants built various model rockets and equipment and worked on challenging practical problems and activities. (GA)

  5. Contact dynamics math model

    NASA Technical Reports Server (NTRS)

    Glaese, John R.; Tobbe, Patrick A.

    1986-01-01

    The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.

  6. Materials modelling in London

    NASA Astrophysics Data System (ADS)

    Ciudad, David

    2016-04-01

    Angelos Michaelides, Professor in Theoretical Chemistry at University College London (UCL) and co-director of the Thomas Young Centre (TYC), explains to Nature Materials the challenges in materials modelling and the objectives of the TYC.

  7. Modeling Newspaper Advertising

    ERIC Educational Resources Information Center

    Harper, Joseph; And Others

    1978-01-01

    Presents a mathematical model for simulating a newspaper financial system. Includes the effects of advertising and circulation for predicting advertising linage as a function of population, income, and advertising rate. (RL)

  8. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  9. Media, Metaphors, and Models.

    ERIC Educational Resources Information Center

    Costanzo, William V.

    1988-01-01

    Explores how students are influenced by media technology, specifically television and computers. Notes that media are not just a vehicle of information, but are primarily models of how to see, think, read, write, and reason. (MM)

  10. Modeling collective cell motility

    NASA Astrophysics Data System (ADS)

    Rappel, Wouter-Jan

    Eukaryotic cells often move in groups, a critical aspect of many biological and medical processes including wound healing, morphogenesis and cancer metastasis. Modeling can provide useful insights into the fundamental mechanisms of collective cell motility. Constructing models that incorporate the physical properties of the cells, however, is challenging. Here, I discuss our efforts to build a comprehensive cell motility model that includes cell membrane properties, cell-substrate interactions, cell polarity, and cell-cell interaction. The model will be applied to a variety of systems, including motion on micropatterned substrates and the migration of border cells in Drosophila. This work was supported by NIH Grant No. P01 GM078586 and NSF Grant No. 1068869.

  11. Modelling pulmonary blood flow.

    PubMed

    Tawhai, Merryn H; Burrowes, Kelly S

    2008-11-30

    Computational model analysis has been used widely to understand and interpret complexity of interactions in the pulmonary system. Pulmonary blood transport is a multi-scale phenomenon that involves scale-dependent structure and function, therefore requiring different model assumptions for the microcirculation and the arterial or venous flows. The blood transport systems interact with the surrounding lung tissue, and are dependent on hydrostatic pressure gradients, control of vasoconstriction, and the topology and material composition of the vascular trees. This review focuses on computational models that have been developed to study the different mechanisms contributing to regional perfusion of the lung. Different models for the microcirculation and the pulmonary arteries are considered, including fractal approaches and anatomically-based methods. The studies that are reviewed illustrate the different complementary approaches that can be used to address the same physiological question of flow heterogeneity.

  12. Solar Furnace Model

    ERIC Educational Resources Information Center

    Palmer, Dennis L.; Olsen, Richard W.

    1977-01-01

    Described is how to build a solar furnace model. A detailed list of materials and methods is included, along with diagrams. This particular activity is part of an audiotutorial unit concerned with the energy crisis and energy alternatives. (MA)

  13. Radiative transfer models

    NASA Technical Reports Server (NTRS)

    Horwitz, James L.

    1992-01-01

    The purpose of this work was to assist with the development of analytical techniques for the interpretation of infrared observations. We have done the following: (1) helped to develop models for continuum absorption calculations for water vapor in the far infrared spectral region; (2) worked on models for pressure-induced absorption for O2 and N2 and their comparison with available observations; and (3) developed preliminary studies of non-local thermal equilibrium effects in the upper stratosphere and mesosphere for infrared gases. These new techniques were employed for analysis of balloon-borne far infrared data by a group at the Harvard-Smithsonian Center for Astrophysics. The empirical continuum absorption model for water vapor in the far infrared spectral region and the pressure-induced N2 absorption model were found to give satisfactory results in the retrieval of the mixing ratios of a number of stratospheric trace constituents from balloon-borne far infrared observations.

  14. Refining climate models

    SciTech Connect

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2012-10-31

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  15. Modeling Infectious Diseases

    MedlinePlus

    ... MIDAS models require a breadth of knowledge, the network draws together an interdisciplinary team of researchers with expertise in epidemiology, infectious diseases, computational biology, statistics, social sciences, physics, computer sciences and informatics. ...

  16. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  17. Orsted Initial Field Model

    NASA Technical Reports Server (NTRS)

    Olsen, N.; Holme, R.; Hulot, G.; Sabaka, T.; Neubert, T.; Toffner-Clausen, L.; Primdahl, F.; Jorgensen, J.; Leger, J.-M.; Barraclough, D.; Smith, David E. (Technical Monitor)

    2000-01-01

    Magnetic measurements taken by the Orsted satellite during geomagnetic quiet conditions around January 1, 2000 have been used to derive a spherical harmonic model of the Earth's magnetic field for epoch 2000.0. The maximum degree and order of the model is 19 for internal, and 2 for external, source fields; however, coefficients above degree 14 may not be robust. Such detailed models exist for only one previous epoch, 1980. Achieved rms misfit is 2 nT for the scalar intensity and 4 nT for the vector components perpendicular to the magnetic field. This model is of higher detail than the IGRF 2000, which for scientific purposes related to the Orsted mission it supersedes.
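As a rough sense of model size: a spherical harmonic expansion truncated at degree N carries N(N+2) Gauss coefficients, since each degree n contributes 2n+1 terms. A quick sketch (the function name is ours, for illustration):

```python
# Each spherical harmonic degree n contributes 2n+1 Gauss coefficients
# (g_n^0 plus the pairs g_n^m, h_n^m for m = 1..n), so a field model
# truncated at degree N has sum_{n=1}^{N} (2n+1) = N*(N+2) coefficients.
def gauss_coefficient_count(max_degree):
    return sum(2 * n + 1 for n in range(1, max_degree + 1))

print(gauss_coefficient_count(19))  # internal field to degree 19 -> 399
print(gauss_coefficient_count(2))   # external field to degree 2  -> 8
```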

  18. Modeling Viral Capsid Assembly

    PubMed Central

    2014-01-01

    I present a review of the theoretical and computational methodologies that have been used to model the assembly of viral capsids. I discuss the capabilities and limitations of approaches ranging from equilibrium continuum theories to molecular dynamics simulations, and I give an overview of some of the important conclusions about virus assembly that have resulted from these modeling efforts. Topics include the assembly of empty viral shells, assembly around single-stranded nucleic acids to form viral particles, and assembly around synthetic polymers or charged nanoparticles for nanotechnology or biomedical applications. I present some examples in which modeling efforts have promoted experimental breakthroughs, as well as directions in which the connection between modeling and experiment can be strengthened. PMID:25663722

  19. Fluidized bed combustor modeling

    NASA Technical Reports Server (NTRS)

    Horio, M.; Rengarajan, P.; Krishnan, R.; Wen, C. Y.

    1977-01-01

    A general mathematical model for the prediction of the performance of a fluidized bed coal combustor (FBC) is developed. The basic elements of the model consist of: (1) hydrodynamics of gas and solids in the combustor; (2) description of the gas and solids contacting pattern; (3) kinetics of combustion; and (4) absorption of SO2 by limestone in the bed. The model is capable of calculating the combustion efficiency, axial bed temperature profile, carbon hold-up in the bed, oxygen and SO2 concentrations in the bubble and emulsion phases, sulfur retention efficiency and particulate carry-over by elutriation. The effects of bed geometry, excess air, location of heat transfer coils in the bed, calcium to sulfur ratio in the feeds, etc. are examined. The calculated results are compared with experimental data. Agreement between the calculated results and the observed data is satisfactory in most cases. Recommendations to enhance the accuracy of the model's predictions are suggested.

  20. Refining climate models

    ScienceCinema

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2016-07-12

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  1. Supersymmetric sigma models

    SciTech Connect

    Bagger, J.A.

    1984-09-01

    We begin to construct the most general supersymmetric Lagrangians in one, two and four dimensions. We find that the matter couplings have a natural interpretation in the language of the nonlinear sigma model.

  2. LSST telescope modeling overview

    NASA Astrophysics Data System (ADS)

    Sebag, J.; Andrew, J.; Angeli, G.; Araujo, C.; Barr, J.; Callahan, S.; Cho, M.; Claver, C.; Daruich, F.; Gressler, W.; Hileman, E.; Liang, M.; Muller, G.; Neill, D.; Schoening, W.; Warner, M.; Wiecha, O.; Xin, B.; Orden Martinez, Alfredo; Perezagua Aguado, Manuel; García Marchena, Luis; Ruiz de Argandoña, Ismael

    2016-08-01

    During this early stage of construction of the Large Synoptic Survey Telescope (LSST), modeling has become a crucial systems engineering process for ensuring that the final detailed design of all the sub-systems that compose the telescope meets requirements and interfaces. Modeling includes multiple tools and types of analyses that are performed to address specific technical issues. Three-dimensional (3D) Computer-aided Design (CAD) modeling has become central for controlling interfaces between subsystems and identifying potential interferences. The LSST telescope dynamic requirements are challenging because of the nature of the LSST survey, which requires a high cadence of rapid slews and short settling times. The combination of finite element methods (FEM) coupled with control system dynamic analysis provides a method to validate these specifications. An overview of these modeling activities is reported in this paper, including specific cases that illustrate their impact.

  3. Modeling Sustainment Investment

    DTIC Science & Technology

    2015-05-01

    Modeling Sustainment Investment, May 2015. Carnegie Mellon University Software Engineering Institute, Pittsburgh, PA 15213.

  4. Population Propensity Measurement Model

    DTIC Science & Technology

    1993-12-01

    Binary predictor variables indicate high-school coursework taken: elementary algebra (DQ702), plane geometry (DQ703), computer science, intermediate algebra (DQ706), and trigonometry (DQ707). Separate models distribute the arrival of applicants over fiscal years, quarters, or months; the primary obstacle in these models is shifting the ...

  5. Model Valid Prediction Period

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2002-12-01

    A new concept, the valid prediction period (VPP), is presented here to evaluate model predictability. VPP is defined as the time at which the prediction error first exceeds a pre-determined criterion (i.e., the tolerance level). It depends not only on the instantaneous error growth, but also on the noise level, the initial error, and the tolerance level. The model predictability skill is then represented by a single scalar, VPP: the longer the VPP, the higher the model predictability skill. A theoretical framework based on the backward Fokker-Planck equation is developed to determine the probability density function (pdf) of VPP. Verification of a Gulf of Mexico nowcast/forecast model is used as an example to demonstrate the usefulness of VPP. Power-law scaling is found in the mean square error of displacement between drifting buoy and model trajectories (both at 50 m depth). The pdf of VPP is asymmetric with a long and broad tail on the higher-value side, which suggests long-term predictability. The calculations demonstrate that long-term (extremely long, such as 50-60 day) predictability is not an "outlier" and shares the same statistical properties as the short-term predictions. References: Chu, P. C., L. M. Ivanov, and C. W. Fan, Backward Fokker-Planck equation for determining model predictability with unknown initial error distribution. J. Geophys. Res., in press, 2002. Chu, P. C., L. M. Ivanov, T. M. Margolina, and O. V. Melnichenko, 2002b: On probabilistic stability of an atmospheric model to various amplitude perturbations. J. Atmos. Sci., in press. Chu, P. C., L. M. Ivanov, L. Kantha, O. V. Melnichenko and Y. A. Poberezhny, 2002c: The long-term correlations and power decay law in model prediction skill. Geophys. Res. Lett., in press.
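The VPP definition above (the first time the prediction error exceeds the tolerance level) can be computed directly from an error time series. A minimal sketch, with an illustrative error series and tolerance rather than data from the paper:

```python
def valid_prediction_period(times, errors, tolerance):
    """Return the first time at which the prediction error exceeds the
    tolerance level (the VPP), or None if the error never exceeds it."""
    for t, err in zip(times, errors):
        if err > tolerance:
            return t
    return None

# Illustrative daily error series with doubling growth (not model data).
times = list(range(10))                   # days
errors = [0.1 * 2 ** t for t in times]    # 0.1, 0.2, 0.4, 0.8, 1.6, ...
print(valid_prediction_period(times, errors, tolerance=1.0))  # -> 4
```

In practice the error series would come from comparing model trajectories against observations (here, drifter positions), and the tolerance would reflect the application's accuracy requirement.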

  6. Argentina corn yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    A model based on multiple regression was developed to estimate corn yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the corn-growing area. Predictor variables for the model were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. A trend variable was included for the years 1965 to 1980 since an increasing trend in yields due to technology was observed between these years.
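A minimal sketch of the technology-trend component of such a regression model, using closed-form ordinary least squares on synthetic data (the actual model also includes the meteorological predictors described above; the numbers here are purely illustrative):

```python
# Closed-form ordinary least squares for the technology-trend term,
#   yield = intercept + slope * t,
# where t is years since the start of the record.  The data below are
# synthetic and illustrative, not the Argentina yield series.
def fit_trend(t, yields):
    n = len(t)
    mt = sum(t) / n
    my = sum(yields) / n
    sxy = sum((x - mt) * (y - my) for x, y in zip(t, yields))
    sxx = sum((x - mt) ** 2 for x in t)
    slope = sxy / sxx
    return my - slope * mt, slope  # (intercept, slope)

t = [year - 1965 for year in range(1965, 1981)]   # 1965..1980
yields = [2.0 + 0.05 * ti for ti in t]            # known trend: 0.05/yr
intercept, slope = fit_trend(t, yields)
print(round(intercept, 3), round(slope, 3))  # -> 2.0 0.05
```

A full version would regress on the trend together with the monthly precipitation and temperature predictors, e.g. via multiple regression on a design matrix.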

  7. Integrated Modeling Systems

    DTIC Science & Technology

    1989-01-01

    Topics include model integration, solver integration, and the integration of various utilities; model integration is further divided into four subtypes based on a four-level ...

  8. Analytic Modeling of Insurgencies

    DTIC Science & Technology

    2014-08-01

    ... influenced by interests and utilities. 4.1 Carrots and Sticks. An analytic model that captures the aforementioned utilitarian aspect is presented in ... "carrots" x. A dynamic utility-based model is developed in [26], in which the state variables are the fractions of contrarians (supporters of the ... "Unanticipated Political Revolution," Public Choice, vol. 61, pp. 41-74, 1989. [26] M. P. Atkinson, M. Kress and R. Szechtman, "Carrots, Sticks and Fog."

  9. Prevalence Incidence Mixture Models

    Cancer.gov

    The R package and webtool fits Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time to event data that is commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling, stratified sampling, and two-phase stratified sampling. Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  10. Applied Reverberation Modeling Workshop

    DTIC Science & Technology

    2011-09-01

    Later, Zhou [3] and others extended their work to reverberation. ASPM/ASTRAL agreed reasonably well with the energy-flux model reverberation and ... simulation and training requirements. Related projects: the ONR-SPAWAR Reverberation Modeling Workshop is a closely related project that was intended ...

  11. Applied Reverberation Modeling Workshop

    DTIC Science & Technology

    2010-01-01

    The test case included both propagation and reverberation with the receiver on a transect perpendicular to the wedge (3-D effects were ignored). The ASTRAL/ASPM ... by the PI. The interest was in determining how the operational models (ASTRAL/ASPM) performed in a sloping environment. ASTRAL is extremely fast ...

  12. Models of multiquark states

    SciTech Connect

    Lipkin, H.J.

    1986-01-01

    The success of simple constituent quark models in single-hadron physics and their failure in multiquark physics is discussed, emphasizing the relation between meson and baryon spectra, hidden color and the color matrix, breakup decay modes, coupled channels, and hadron-hadron interactions via flipping and tunneling of flux tubes. Model-independent predictions for possible multiquark bound states are considered and the most promising candidates are suggested. A quark approach to baryon-baryon interactions is discussed.

  13. Argentina wheat yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    Five models based on multiple regression were developed to estimate wheat yields for the five wheat growing provinces of Argentina. Meteorological data sets were obtained for each province by averaging data for stations within each province. Predictor variables for the models were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. Buenos Aires was the only province for which a trend variable was included because of increasing trend in yield due to technology from 1950 to 1963.

  14. CO2 laser modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Barry

    1992-01-01

    The topics covered include the following: (1) CO2 laser kinetics modeling; (2) gas lifetimes in pulsed CO2 lasers; (3) frequency chirp and laser pulse spectral analysis; (4) LAWS A' Design Study; and (5) discharge circuit components for LAWS. The appendices include LAWS Memos, computer modeling of pulsed CO2 lasers for lidar applications, discharge circuit considerations for pulsed CO2 lidars, and presentation made at the Code RC Review.

  15. Plutonium Storage Model

    SciTech Connect

    Krupa, J.F.

    2001-01-25

    An EXTEND4/SDI-Industry model has been created which can easily accommodate changes in scenarios by changing input parameters. It matches well with hand crafted spreadsheet analyses, and has the advantage that it shows system logic and can be documented. The output of the model for a given case is shown in Figure E-1. The comparable hand crafted spreadsheet version is shown in Figure E-2.

  16. Argentina soybean yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    A model based on multiple regression was developed to estimate soybean yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the soybean growing area. Predictor variables for the model were derived from monthly total precipitation and monthly average temperature. A trend variable was included for the years 1969 to 1978 since an increasing trend in yields due to technology was observed between these years.

  17. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is ...

  18. Modeling of Particulate Emissions

    DTIC Science & Technology

    2011-12-01

    Briefing charts covering soot formation kinetics (inception via dimerization, surface growth, coagulation, oxidation, and carbonization), a sectional conservation equation for the particle size distribution, soot kinetics based on OH and O2, and combustor modeling of the fuel-spray shear layer, recirculation zones, quench zones, and burn-out zones, with a full set of reaction kinetics and empirical tuning to NOx and CO emissions.

  19. Modeling using optimization routines

    NASA Technical Reports Server (NTRS)

    Thomas, Theodore

    1995-01-01

    Modeling using mathematical optimization routines is a design tool used in magnetic suspension system development. MATLAB software is used to calculate minimum cost subject to other desired constraints. The parameters to be measured are programmed into mathematical equations, and MATLAB calculates answers for each set of inputs; the inputs cover the boundary limits of the design. A magnetic suspension system using electromagnets mounted in a planar array is one design that makes use of optimization modeling.
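
    The procedure described above, evaluating a cost function over inputs that span the design boundary limits and keeping the feasible minimum, can be sketched as follows. The cost function, constraint, and parameter ranges are invented for illustration; they are not the actual magnetic suspension design equations.

    ```python
    import itertools

    # Toy design variables for an electromagnet: number of turns and coil current.
    def coil_cost(turns, current):
        # hypothetical cost: wire mass grows with turns, power loss with current^2
        return 0.5 * turns + 2.0 * current**2

    def force_constraint(turns, current, required=10.0):
        # hypothetical requirement: lift force proportional to turns * current
        return turns * current >= required

    turns_range = range(10, 101, 10)                  # boundary limits: 10..100
    current_range = [0.5 * k for k in range(1, 11)]   # 0.5..5.0 A

    # Grid search over every combination of inputs, as in the boundary-limit
    # sweep described above, keeping the cheapest feasible design.
    best = None
    for turns, current in itertools.product(turns_range, current_range):
        if not force_constraint(turns, current):
            continue
        c = coil_cost(turns, current)
        if best is None or c < best[0]:
            best = (c, turns, current)

    cost, best_turns, best_current = best
    ```

    A gradient-based or constrained solver would replace the grid search in practice; the exhaustive sweep simply mirrors the "answers for each set of inputs" workflow of the abstract.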

  20. Primary health care models

    PubMed Central

    Brown, Judith Belle; French, Reta; McCulloch, Amy; Clendinning, Eric

    2012-01-01

    Abstract Objective To explore the knowledge and perceptions of fourth-year medical students regarding the new models of primary health care (PHC) and to ascertain whether that knowledge influenced their decisions to pursue careers in family medicine. Design Qualitative study using semistructured interviews. Setting The Schulich School of Medicine and Dentistry at The University of Western Ontario in London. Participants Fourth-year medical students graduating in 2009 who indicated family medicine as a possible career choice on their Canadian Residency Matching Service applications. Methods Eleven semistructured interviews were conducted between January and April of 2009. Data were analyzed using an iterative and interpretive approach. The analysis strategy of immersion and crystallization assisted in synthesizing the data to provide a comprehensive view of key themes and overarching concepts. Main findings Four key themes were identified: the level of students’ knowledge regarding PHC models varied; the knowledge was generally obtained from practical experiences rather than classroom learning; students could identify both advantages and disadvantages of working within the new PHC models; and although students regarded the new PHC models positively, these models did not influence their decisions to pursue careers in family medicine. Conclusion Knowledge of the new PHC models varies among fourth-year students, indicating a need for improved education strategies in the years before clinical training. Being able to identify advantages and disadvantages of the PHC models was not enough to influence participants’ choice of specialty. Educators and health care policy makers need to determine the best methods to promote and facilitate knowledge transfer about these PHC models. PMID:22518904

  1. Theory Modeling and Simulation

    SciTech Connect

    Shlachter, Jack

    2012-08-23

    Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.

  2. Distributed generation systems model

    SciTech Connect

    Barklund, C.R.

    1994-12-31

    A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.

  3. Numerical Modeling of Airblast.

    DTIC Science & Technology

    1987-06-01

    Report SAIC 87/1701, June 1987. Submitted to Dr. Jay Boris, Laboratory for Computational Physics, Naval Research Laboratory. A model based on boundary layer physical assumptions provides an unsteady prediction of the mass flux emerging from the ground; this model was first proposed by Mirels. High explosive dust cloud simulation, combined with numerical calculations, provides a research path toward explaining the physics modeled.

  4. Spin models as microfoundation of macroscopic market models

    NASA Astrophysics Data System (ADS)

    Krause, Sebastian M.; Bornholdt, Stefan

    2013-09-01

    Macroscopic price evolution models are commonly used for investment strategies, and there are promising first achievements in defining microscopic agent-based models for the same purpose. Microscopic models allow a deeper understanding of mechanisms in the market than purely phenomenological macroscopic models, and thus offer the chance of better models for market regulation. However, microscopic and macroscopic models are commonly studied separately. Here, we exemplify a unified view of a microscopic and a macroscopic market model in a case study, deducing a macroscopic Langevin equation from a microscopic spin market model closely related to the Ising model. The interplay of the microscopic and the macroscopic view allows for a better understanding and adjustment of the microscopic model as well, and may guide the construction of agent-based market models as the basis of macroscopic models.
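
    A minimal sketch of the microscopic side of such a model (not the authors' exact formulation): each agent carries a spin +/-1 (buy/sell), agents tend to align with their neighbors via an Ising-type coupling, and the magnetization is the macroscopic order parameter whose coarse-grained dynamics a Langevin equation would describe. All parameters here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 20          # 20x20 lattice of agents
    beta = 0.6      # inverse "temperature" (herding strength), assumed value
    spins = rng.choice([-1, 1], size=(L, L))

    def metropolis_sweep(spins):
        # One Monte Carlo sweep: L*L single-spin Metropolis updates with
        # periodic boundary conditions.
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

    # Record the magnetization time series -- the macroscopic variable whose
    # drift and noise a Langevin description would parameterize.
    magnetization = []
    for _ in range(50):
        metropolis_sweep(spins)
        magnetization.append(spins.mean())
    ```

    Fitting a drift and diffusion term to increments of such a magnetization series is one standard route from a microscopic spin model to a macroscopic Langevin equation.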

  5. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
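
    The central bookkeeping in a model of this kind can be illustrated with a hedged numerical sketch: if eps_p is the average energy required to produce an ion in the discharge plasma and f_B the fraction of those ions extracted into the beam, each beam ion effectively pays for the ions lost to the walls. The numbers below are illustrative, not measured thruster data.

    ```python
    # Hypothetical illustration of the beam-ion energy-cost relation implied
    # by the abstract: cost per beam ion = plasma ion energy cost divided by
    # the extracted-ion fraction. Values are invented for the sketch.

    def beam_ion_energy_cost(eps_plasma_eV, extracted_fraction):
        # each beam ion also "pays" for ions produced but not extracted
        return eps_plasma_eV / extracted_fraction

    cost = beam_ion_energy_cost(eps_plasma_eV=150.0, extracted_fraction=0.5)
    # doubling the extracted fraction halves the energy cost per beam ion,
    # which is why designs extracting a large fraction of ions perform better
    improved = beam_ion_energy_cost(150.0, 1.0)
    ```

    This one-line relation motivates the abstract's closing point: designs that extract a large fraction of the ions produced in the discharge chamber should exhibit improved performance.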

  6. Modeling electronegative plasma discharge

    SciTech Connect

    Lichtenberg, A.J.; Lieberman, M.A.

    1995-12-31

    Macroscopic analytic models for a three-component electronegative gas discharge are developed. Assuming the negative ions to be in Boltzmann equilibrium, a positive ion ambipolar diffusion equation is derived. The discharge consists of an electronegative core and electropositive edges. The electron density in the core is nearly uniform, allowing a parabolic approximation to the plasma profile to be employed. The resulting equilibrium equations are solved analytically and matched to a constant mobility transport model of an electropositive edge plasma. The solutions are compared to a simulation of a parallel-plane r.f. driven oxygen plasma for p = 50 mTorr and n_e0 = 2.4 x 10^15 m^-3. The ratio α_0 of central negative ion density to electron density, and the electron temperature T_e, found in the simulation are in reasonable agreement with the values calculated from the model. The model is extended to: (1) low pressures, where a variable mobility model is used in the electropositive edge region; and (2) high α_0, in which the edge region disappears. The inclusion of a second positive ion species, which can be very important in describing electronegative discharges used for materials processing, is a possible extension of the model.

  7. Causal Rasch models

    PubMed Central

    Stenner, A. Jackson; Fisher, William P.; Stone, Mark H.; Burdick, Donald S.

    2013-01-01

    Rasch's unidimensional models for measurement show how to connect object measures (e.g., reader abilities), measurement mechanisms (e.g., machine-generated cloze reading items), and observational outcomes (e.g., counts correct on reading instruments). Substantive theory shows what interventions or manipulations to the measurement mechanism can be traded off against a change to the object measure to hold the observed outcome constant. A Rasch model integrated with a substantive theory dictates the form and substance of permissible interventions. Rasch analysis, absent construct theory and an associated specification equation, is a black box in which understanding may be more illusory than not. Finally, the quantitative hypothesis can be tested by comparing theory-based trade-off relations with observed trade-off relations. Only quantitative variables (as measured) support such trade-offs. Note that to test the quantitative hypothesis requires more than manipulation of the algebraic equivalencies in the Rasch model or descriptively fitting data to the model. A causal Rasch model involves experimental intervention/manipulation on either reader ability or text complexity or a conjoint intervention on both simultaneously to yield a successful prediction of the resultant observed outcome (count correct). We conjecture that when this type of manipulation is introduced for individual reader text encounters and model predictions are consistent with observations, the quantitative hypothesis is sustained. PMID:23986726
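
    The trade-off property discussed above follows directly from the Rasch model's success probability, which depends only on the difference between person ability (theta) and item difficulty (delta). A short worked example, with illustrative parameter values:

    ```python
    import math

    # Rasch model: P(correct) = exp(theta - delta) / (1 + exp(theta - delta))
    def rasch_p(theta, delta):
        return 1.0 / (1.0 + math.exp(-(theta - delta)))

    # Baseline: a reader of ability 1.0 on an item of difficulty 0.5.
    p1 = rasch_p(1.0, 0.5)

    # Conjoint intervention: raise both reader ability and text complexity by
    # the same offset. The predicted outcome is unchanged -- the theory-based
    # trade-off relation a causal Rasch model puts to experimental test.
    p2 = rasch_p(1.0 + 0.7, 0.5 + 0.7)
    ```

    Observing that such offsetting manipulations hold the count correct constant is exactly the kind of prediction that, per the abstract, sustains the quantitative hypothesis.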

  8. Nonparametric Streamflow Disaggregation Model

    NASA Astrophysics Data System (ADS)

    Lee, T.; Salas, J. D.; Prairie, J. R.

    2009-05-01

    Stochastic streamflow generation is generally utilized for planning and management of water resources systems. For this purpose a number of parametric and nonparametric modeling alternatives have been suggested in the literature. Among them, temporal and spatial disaggregation approaches play an important role, particularly in making sure that historical variance-covariance properties are preserved at various temporal and spatial scales. In this paper, we review the underlying features of nonparametric disaggregation, identify some of their pros and cons, and propose a disaggregation algorithm that is capable of surmounting some of the shortcomings of the current models. The proposed models hinge on k-nearest neighbor resampling, an accurate adjusting procedure, and a genetic algorithm. The model has been tested and compared to an existing nonparametric disaggregation approach using data of the Colorado River system. It has been shown that the model is capable of (i) reproducing the season-to-season correlations including the correlation between the last season of the previous year and the first season of the current year, (ii) minimizing or avoiding the generation of flow patterns across the year that are literally the same as those of the historical records, and (iii) minimizing or avoiding the generation of negative flows. In addition, it is applicable to intermittent river regimes. Suggestions for further improving the model are discussed.
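
    A much-simplified sketch of the k-nearest-neighbor ingredient (not the authors' full algorithm, which also includes an adjusting procedure and a genetic algorithm): to split a generated annual flow into seasons, find the k historical years with the most similar annual totals, resample one of them, and rescale its seasonal pattern so the seasons sum exactly to the generated annual value. The historical record here is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical historical record: 40 years x 4 seasons of flows.
    hist = rng.uniform(50, 150, size=(40, 4))
    hist_annual = hist.sum(axis=1)

    def disaggregate(annual_flow, k=5):
        # k nearest neighbours by distance in annual total
        idx = np.argsort(np.abs(hist_annual - annual_flow))[:k]
        # resample one neighbour (kernel weights favouring closer years are
        # common; uniform weights keep the sketch short)
        pick = rng.choice(idx)
        pattern = hist[pick] / hist_annual[pick]   # seasonal proportions
        return annual_flow * pattern               # rescale to match the total

    seasons = disaggregate(400.0)
    ```

    Because each resampled pattern comes from an observed year, season-to-season correlations are inherited from the historical record, and the rescaling guarantees mass balance with the annual series.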

  9. General composite Higgs models

    NASA Astrophysics Data System (ADS)

    Marzocca, David; Serone, Marco; Shu, Jing

    2012-08-01

    We construct a general class of pseudo-Goldstone composite Higgs models, within the minimal SO(5)/SO(4) coset structure, that are not necessarily of moose-type. We characterize the main properties these models should have in order to give rise to a Higgs mass around 125 GeV. We assume the existence of relatively light and weakly coupled spin 1 and 1/2 resonances. In absence of a symmetry principle, we introduce the Minimal Higgs Potential (MHP) hypothesis: the Higgs potential is assumed to be one-loop dominated by the SM fields and the above resonances, with a contribution that is made calculable by imposing suitable generalizations of the first and second Weinberg sum rules. We show that a 125 GeV Higgs requires light, often sub-TeV, fermion resonances. Their presence can also be important for the models to successfully pass the electroweak precision tests. Interestingly enough, the latter can also be passed by models with a heavy Higgs around 320 GeV. The composite Higgs models of the moose-type considered in the literature can be seen as particular limits of our class of models.

  10. Modeling of transitional flows

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1988-01-01

    An effort directed at developing improved transitional models was initiated. The focus of this work was concentrated on the critical assessment of a popular existing transitional model developed by McDonald and Fish in 1972. The objective of this effort was to identify the shortcomings of the McDonald-Fish model and to use the insights gained to suggest modifications or alterations of the basic model. In order to evaluate the transitional model, a compressible boundary layer code was required. Accordingly, a two-dimensional compressible boundary layer code was developed. The program was based on a three-point fully implicit finite difference algorithm where the equations were solved in an uncoupled manner with second order extrapolation used to evaluate the non-linear coefficients. Iteration was offered as an option if the extrapolation error could not be tolerated. The differencing scheme was arranged to be second order in both spatial directions on an arbitrarily stretched mesh. A variety of boundary condition options were implemented, including specification of an external pressure gradient, specification of a wall temperature distribution, and specification of an external temperature distribution. Overall, the results of the initial phase of this work indicate that the McDonald-Fish model does a poor job of predicting the details of the turbulent flow structure during the transition region.

  11. Checker Board Model

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2008-04-01

    The Checker Board Model (CBM) is a 2D model of the nucleus proposing that the synchronization of two outer rotating quarks in the nucleons accounts for the magnetic moments of the nucleons, that the resulting magnetic flux couples (weaves) into the 2D checker board array structures, and that this 2D magnetic coupling, in addition to the electrostatic forces of the two rotating and one stationary quark, accounts for the apparent strong nuclear force. The symmetry of the He nucleus helps explain why this 2D structure is stable. This model explains the masses of the proton and neutron, along with their magnetic moments and their absolute and relative sizes, and predicts the masses of two newly proposed quarks^(1): the ``up'' and the ``dn'' quarks. Since the masses of the ``up'' and ``dn'' quarks determined by the CBM (237.31 MeV and 42.392 MeV respectively) did not fit within the standard model as candidates for u and d, a new model (New Physics) had to be invented. The details of this new nuclear physics model can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/

  12. Checker Board Model

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2009-05-01

    The Checker Board Model (CBM) is a 2D model of the nucleus proposing that the synchronization of two outer rotating quarks in the nucleons accounts for the magnetic moments of the nucleons, that the resulting magnetic flux couples (weaves) into the 2D checker board array structures, and that this 2D magnetic coupling, in addition to the electrostatic forces of the two rotating and one stationary quark, accounts for the apparent strong nuclear force. The symmetry of the He nucleus helps explain why this 2D structure is stable. This model explains the masses of the proton and neutron, along with their magnetic moments and their absolute and relative sizes, and predicts the masses of two newly proposed quarks^(1): the ``up'' and the ``dn'' quarks. Since the masses of the ``up'' and ``dn'' quarks determined by the CBM (237.31 MeV and 42.392 MeV respectively) did not fit within the standard model as candidates for u and d, a new model (New Physics) had to be invented. The details of this new nuclear physics model can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/

  13. Checker Board Model

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2007-04-01

    The Checker Board Model (CBM) is a 2D model of the nucleus proposing that the synchronization of two outer rotating quarks in the nucleons accounts for the magnetic moments of the nucleons, that the resulting magnetic flux couples (weaves) into the 2D checker board array structures, and that this 2D magnetic coupling, in addition to the electrostatic forces of the two rotating and one stationary quark, accounts for the apparent strong nuclear force. The symmetry of the He nucleus helps explain why this 2D structure is stable. This model explains the masses of the proton and neutron, along with their magnetic moments and their absolute and relative sizes, and predicts the masses of two newly proposed quarks^(1): the ``up'' and the ``dn'' quarks. Since the masses of the ``up'' and ``dn'' quarks determined by the CBM (237.31 MeV and 42.392 MeV respectively) did not fit within the standard model as candidates for u and d, a new model (New Physics) had to be invented. The details of this new nuclear physics model can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/

  14. The MASTER-2001 model

    NASA Astrophysics Data System (ADS)

    Bendisch, J.; Bunte, K.; Klinkrad, H.; Krag, H.; Martin, C.; Sdunnus, H.; Walker, R.; Wegener, P.; Wiedemann, C.

    2004-01-01

    The Meteoroid and Space Debris Terrestrial Environment Reference (MASTER) model is the European particulate environment and risk assessment model. It is based on quasi-deterministic principles, using comprehensive orbit propagation theories and volume discretisation techniques, to derive spatial density and velocity distributions in a three-dimensional control volume ranging from LEO to GEO altitudes. The new release, MASTER-2001, incorporates new modelling and validation approaches and enables the calculation of fluxes on targets operating between the years 1957 and 2050, using detailed simulation results. This could be achieved by using not only the POEM simulation tool for the past to present debris populations, but also applying the long term prediction tool DELTA to obtain future populations. The paper describes the features and results of the MASTER-2001 model, and the updated modelling approach (e.g., the use of a new fragmentation model). The historical and future evolution of the space debris environment in terms of spatial density and object fluxes as given by MASTER-2001 are presented and discussed.

  15. Learning planar Ising models

    SciTech Connect

    Johnson, Jason K; Chertkov, Michael; Netrapalli, Praneeth

    2010-11-12

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus our attention on the class of planar Ising models, for which inference is tractable using techniques of statistical physics [Kac and Ward; Kasteleyn]. Based on these techniques and recent methods for planarity testing and planar embedding [Chrobak and Payne], we propose a simple greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. We present the results of numerical experiments evaluating the performance of our algorithm.
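
    A much-simplified sketch in the spirit of the greedy algorithm above: a spanning tree is trivially planar, so greedily adding the edges with the largest absolute pairwise correlation while avoiding cycles (Kruskal style) yields a tractable tree-structured approximation. The full method instead tests planarity of general graphs and fits an optimal planar Ising model; the binary data here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    samples = rng.choice([-1, 1], size=(500, n))   # hypothetical binary data
    corr = np.corrcoef(samples, rowvar=False)

    # Candidate edges sorted by |pairwise correlation|, strongest first.
    edges = sorted(((abs(corr[i, j]), i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)

    parent = list(range(n))        # union-find for cycle detection
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # Greedily keep an edge only if it joins two different components.
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    ```

    Replacing the "no cycle" test with an incremental planarity test, and refitting the Ising couplings after each accepted edge, recovers the flavor of the greedy planar algorithm the abstract describes.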

  16. Dynamic causal modelling revisited.

    PubMed

    Friston, K J; Preller, Katrin H; Mathys, Chris; Cagnan, Hayriye; Heinzle, Jakob; Razi, Adeel; Zeidman, Peter

    2017-02-17

    This paper revisits the dynamic causal modelling of fMRI timeseries by replacing the usual (Taylor) approximation to neuronal dynamics with a neural mass model of the canonical microcircuit. This provides a generative or dynamic causal model of laminar specific responses that can generate haemodynamic and electrophysiological measurements. In principle, this allows the fusion of haemodynamic and (event related or induced) electrophysiological responses. Furthermore, it enables Bayesian model comparison of competing hypotheses about physiologically plausible synaptic effects; for example, does attentional modulation act on superficial or deep pyramidal cells - or both? In this technical note, we describe the resulting dynamic causal model and provide an illustrative application to the attention to visual motion dataset used in previous papers. Our focus here is on how to answer long-standing questions in fMRI; for example, do haemodynamic responses reflect extrinsic (afferent) input from distant cortical regions, or do they reflect intrinsic (recurrent) neuronal activity? To what extent do inhibitory interneurons contribute to neurovascular coupling? What is the relationship between haemodynamic responses and the frequency of induced neuronal activity? This paper does not pretend to answer these questions; rather it shows how they can be addressed using neural mass models of fMRI timeseries.

  17. Extended chameleon models

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Tamanini, Nicola

    2016-05-01

    We extend the chameleon models by considering scalar-fluid theories where the coupling between matter and the scalar field can be represented by a quadratic effective potential with density-dependent minimum and mass. In this context, we study the effects of the scalar field on Solar System tests of gravity and show that models passing these stringent constraints can still induce large modifications of Newton's law on galactic scales. On these scales we analyze models which could lead to a percent deviation of Newton's law outside the virial radius. We then model the dark matter halo as a Navarro-Frenk-White profile and explicitly find that the fifth force can give large contributions around the galactic core in a particular model where the scalar field mass is constant and the minimum of its potential varies linearly with the matter density. At cosmological distances, we find that this model does not alter the growth of large scale structures and therefore would be best tested on galactic scales, where interesting signatures might arise in the galaxy rotation curves.

  18. Prognostic models in melanoma.

    PubMed

    Halpern, A C; Schuchter, L M

    1997-02-01

    Predicting which patients with primary melanoma are at risk of developing metastatic disease is important for making rational therapeutic decisions. Tumor thickness alone is the most commonly used predictor of survival, but other clinical and pathologic variables also play an important role. We have developed two multivariate logistic regression models to predict survival in patients who have primary melanoma. The first of these models assigns patients to two groups based on radial or vertical growth phase. The probability of survival for those patients with vertical growth phase tumors was further determined based on a model using six variables (mitotic rate, tumor infiltrating lymphocytes, tumor thickness, anatomic site of the primary tumor, sex, and histologic regression) that have the greatest strength as independent predictors of survival. This model is 89% accurate for predicting survival in patients with vertical growth phase tumors. A second model has been developed that uses readily available clinical parameters to predict survival. Four variables (tumor thickness, anatomic site, age, and sex) entered into the model as powerful independent predictors. Clinical algorithms for assessing patient risk are provided.
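
    The second (clinical) model described above has the general shape of a logistic regression on four variables. The sketch below is a hedged illustration only: the coefficients, encodings, and risk scores are invented and do not reproduce the published model's fitted values.

    ```python
    import math

    # Hypothetical logistic survival model on tumour thickness, anatomic
    # site, age, and sex. All coefficients are invented for illustration.
    def survival_probability(thickness_mm, axial_site, age, male,
                             b0=4.0, b_thick=-0.9, b_site=-0.6,
                             b_age=-0.02, b_sex=-0.4):
        # linear predictor: in this toy parameterisation, greater thickness
        # and age, axial site, and male sex lower the predicted survival
        z = (b0 + b_thick * thickness_mm + b_site * axial_site
             + b_age * age + b_sex * male)
        return 1.0 / (1.0 + math.exp(-z))

    p_low_risk = survival_probability(0.5, axial_site=0, age=40, male=0)
    p_high_risk = survival_probability(4.0, axial_site=1, age=70, male=1)
    ```

    A real model of this form would be fit to outcome data and reported with its coefficients and discrimination statistics; the point here is only how four readily available clinical variables combine into a single survival probability.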

  19. Animal models of sarcoidosis.

    PubMed

    Hu, Yijie; Yibrehu, Betel; Zabini, Diana; Kuebler, Wolfgang M

    2017-03-01

    Sarcoidosis is a debilitating, inflammatory, multiorgan, granulomatous disease of unknown cause, commonly affecting the lung. In contrast to other chronic lung diseases such as interstitial pulmonary fibrosis or pulmonary arterial hypertension, there is so far no widely accepted or implemented animal model for this disease. This has hampered our insights into the etiology of sarcoidosis, the mechanisms of its pathogenesis, the identification of new biomarkers and diagnostic tools and, last but not least, the development and implementation of novel treatment strategies. Over past years, however, a number of new animal models have been described that may provide useful tools to fill these critical knowledge gaps. In this review, we therefore outline the present status quo for animal models of sarcoidosis, comparing their pros and cons with respect to their ability to mimic the etiological, clinical and histological hallmarks of human disease, and discuss their applicability for future research. Overall, the recent surge in animal models has markedly expanded our options for translational research; however, given the relatively early stage of most animal models for sarcoidosis, appropriate replication of etiological and histological features of clinical disease, reproducibility, and usefulness in terms of identification of new therapeutic targets and biomarkers and testing of new treatments should be prioritized when considering the refinement of existing or the development of new models.

  20. Climate and atmospheric modeling studies

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.