Sample records for initial model runs

  1. Defensive Swarm: An Agent Based Modeling Analysis

    DTIC Science & Technology

    2017-12-01

    INITIAL ALGORITHM (SINGLE-RUN) TESTING ... Patrol Algorithm—Passive ... scalability are therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see ... changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations, testing the model under different circumstances.

  2. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models has been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would make both run time and fit aspects comparable. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
    The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade, in which parameter values in a later optimization step are initialized or fixed from a simpler model fitted in an earlier step, further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes, and makes available, standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and for combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
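
    The multi-step "fitting cascade" described above can be sketched generically with SciPy's gradient-free Powell method. This is a minimal illustration on a synthetic signal, not the authors' GPU toolbox; the exponential models, starting values, and parameters are all invented for demonstration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic noiseless "signal": amplitude 0.7, decay rate 3.0, baseline 0.3
x = np.linspace(0.0, 1.0, 50)
y = 0.7 * np.exp(-3.0 * x) + 0.3

def sse(params, model):
    # Sum-of-squared-errors objective shared by both fitting steps
    return np.sum((model(params) - y) ** 2)

# Simpler model: baseline tied to the amplitude (one fewer free parameter)
def simple_model(p):
    a, rate = p
    return a * np.exp(-rate * x) + (1.0 - a)

# Step 1: gradient-free Powell fit of the simpler model
res1 = minimize(sse, x0=[0.5, 1.0], args=(simple_model,), method="Powell")

# Fuller model: independent baseline parameter
def full_model(p):
    a, rate, base = p
    return a * np.exp(-rate * x) + base

# Step 2 of the cascade: initialize the fuller model from the step-1 estimates
x0 = [res1.x[0], res1.x[1], 1.0 - res1.x[0]]
res2 = minimize(sse, x0=x0, args=(full_model,), method="Powell")
```

    Starting the fuller model from the simpler model's estimates, rather than from a fixed default, is what the cascade buys: the second Powell search begins near the basin of the correct solution.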

  3. Bed composition generation for morphodynamic modeling: Case study of San Pablo Bay in California, USA

    USGS Publications Warehouse

    van der Wegen, M.; Dastgheib, A.; Jaffe, B.E.; Roelvink, D.

    2011-01-01

    Applications of process-based morphodynamic models are often constrained by limited availability of data on bed composition, which may have a considerable impact on the modeled morphodynamic development. One may even distinguish a period of "morphodynamic spin-up" in which the model generates the bed level according to some ill-defined initial bed composition rather than describing the realistic behavior of the system. The present paper proposes a methodology to generate bed composition of multiple sand and/or mud fractions that can act as the initial condition for the process-based numerical model Delft3D. The bed composition generation (BCG) run does not include bed level changes, but does permit the redistribution of multiple sediment fractions over the modeled domain. The model applies the concept of an active layer that may differ in sediment composition above an underlayer with fixed composition. In the case of a BCG run, the bed level is kept constant, whereas the bed composition can change. The approach is applied to San Pablo Bay in California, USA. Model results show that the BCG run reallocates sand and mud fractions over the model domain. Initially, a major sediment reallocation takes place, but development rates decrease in the longer term. Runs that take the outcome of a BCG run as a starting point lead to more gradual morphodynamic development. Sensitivity analysis shows the impact of variations in the morphological factor, the active layer thickness, and wind waves. An important but difficult-to-characterize criterion for a successful application of a BCG run is that it should not lead to a bed composition that fixes the bed so that it dominates the "natural" morphodynamic development of the system. Future research will focus on a decadal morphodynamic hindcast and comparison with measured bathymetries in San Pablo Bay so that the proposed methodology can be tested and optimized. © 2010 The Author(s).
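
    The central BCG idea, holding the bed level fixed while letting the active-layer fractions redistribute, can be illustrated with a toy 1-D analogue. This is not the Delft3D formulation; the shear-stress field, mobility constant, and uniform-redeposition closure are all hypothetical:

```python
import numpy as np

# Toy 1-D analogue of a bed composition generation (BCG) run: the bed level
# (active-layer thickness) is held fixed while sand/mud fractions evolve.
n_cells, n_steps, dt = 10, 500, 0.05
tau = np.linspace(0.1, 1.0, n_cells)   # imposed bed shear stress per cell
mobility_mud = 1.0                     # mud is winnowed where stress is high
sand = np.full(n_cells, 0.5)           # ill-defined initial composition

for _ in range(n_steps):
    mud = 1.0 - sand                        # fractions sum to 1 (fixed bed level)
    pickup = mobility_mud * tau * mud * dt  # local mud resuspension
    deposit = pickup.mean()                 # toy closure: uniform redeposition
    mud = np.clip(mud - pickup + deposit, 0.0, 1.0)
    sand = 1.0 - mud

# High-stress cells end up sandier, low-stress cells muddier, mimicking the
# reallocation a BCG run produces before the morphodynamic run proper starts
```

    Using the equilibrated `sand` field, rather than the uniform initial guess, as the starting composition is what lets the subsequent morphodynamic run skip the spin-up transient.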

  4. Toward Improved Land Surface Initialization in Support of Regional WRF Forecasts at the Kenya Meteorological Department

    NASA Technical Reports Server (NTRS)

    Case, Jonathan; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.

    2014-01-01

    Flooding and drought are two key forecasting challenges for the Kenya Meteorological Department (KMD). Atmospheric processes leading to excessive precipitation and/or prolonged drought can be quite sensitive to the state of the land surface, which interacts with the boundary layer of the atmosphere providing a source of heat and moisture. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface within weakly-sheared environments, such as in the tropics and sub-tropics. These heat and moisture fluxes during the day can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in numerical weather prediction models. Enhanced regional modeling capabilities have the potential to improve forecast guidance in support of daily operations and high-end events over east Africa. KMD currently runs a configuration of the Weather Research and Forecasting (WRF) model in real time to support its daily forecasting operations, invoking the Nonhydrostatic Mesoscale Model (NMM) dynamical core. They make use of the National Oceanic and Atmospheric Administration / National Weather Service Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the WRF-NMM model runs on a 7-km regional grid over eastern Africa. Two organizations at the National Aeronautics and Space Administration Marshall Space Flight Center in Huntsville, AL, SERVIR and the Short-term Prediction Research and Transition (SPoRT) Center, have established a working partnership with KMD for enhancing its regional modeling capabilities. To accomplish this goal, SPoRT and SERVIR will provide experimental land surface initialization datasets and model verification capabilities to KMD. 
To produce a land-surface initialization more consistent with the resolution of the KMD-WRF runs, the NASA Land Information System (LIS) will be run at a comparable resolution to provide real-time, daily soil initialization data in place of interpolated Global Forecast System soil moisture and temperature data. Additionally, real-time green vegetation fraction data from the Visible Infrared Imaging Radiometer Suite will be incorporated into the KMD-WRF runs, once it becomes publicly available from the National Environmental Satellite Data and Information Service. Finally, model verification capabilities will be transitioned to KMD using the Model Evaluation Tools (MET) package, in order to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. The transition of these MET tools will enable KMD to monitor model forecast accuracy in near real time. This presentation will highlight preliminary verification results of WRF runs over east Africa using the LIS land surface initialization.

  5. NBER working paper series: oil and the dollar. Working Paper No. 554

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krugman, P.

    1980-01-01

    This paper develops a simple theoretical model of the effect of an oil price increase on exchange rates. The model shows that the direction of this effect depends on a comparison of the direct balance-of-payments burden of the higher oil price with the indirect balance-of-payments benefits of OPEC spending and investment. In the short run, what matters is whether the US share of world oil imports is more or less than its share of OPEC asset holdings; in the long run, whether its share of oil imports is more or less than its share of OPEC imports. Casual empiricism suggests that the initial effect and the long-run effect will run in opposite directions; an oil price increase will initially lead to dollar appreciation, but eventually leads to dollar depreciation.

  6. Impact of Targeted Ocean Observations for Improving Ocean Model Initialization for Coupled Hurricane Forecasting

    NASA Astrophysics Data System (ADS)

    Halliwell, G. R.; Srinivasan, A.; Kourafalou, V. H.; Yang, H.; Le Henaff, M.; Atlas, R. M.

    2012-12-01

    The accuracy of hurricane intensity forecasts produced by coupled forecast models is influenced by errors and biases in SST forecasts produced by the ocean model component and the resulting impact on the enthalpy flux from ocean to atmosphere that powers the storm. Errors and biases in fields used to initialize the ocean model seriously degrade SST forecast accuracy. One strategy for improving ocean model initialization is to design a targeted observing program using airplanes and in-situ devices such as floats and drifters so that assimilation of the additional data substantially reduces errors in the ocean analysis system that provides the initial fields. Given the complexity and expense of obtaining these additional observations, observing system design methods such as OSSEs are attractive for designing efficient observing strategies. A new fraternal-twin ocean OSSE system based on the HYbrid Coordinate Ocean Model (HYCOM) is used to assess the impact of targeted ocean profiles observed by hurricane research aircraft, and also by in-situ float and drifter deployments, on reducing errors in initial ocean fields. A 0.04-degree HYCOM simulation of the Gulf of Mexico is evaluated as the nature run by determining that important ocean circulation features such as the Loop Current and synoptic cyclones and anticyclones are realistically simulated. The data-assimilation system is run on a 0.08-degree HYCOM mesh with a substantially different model configuration from the nature run, and it uses a new Ensemble Kalman Filter (EnKF) algorithm optimized for the ocean model's hybrid vertical coordinates.
    The OSSE system is evaluated and calibrated by first running Observing System Experiments (OSEs) to evaluate existing observing systems, specifically quantifying the impact of assimilating more than one satellite altimeter, and also the impact of assimilating targeted ocean profiles taken by the NOAA WP-3D hurricane research aircraft in the Gulf of Mexico during the Deepwater Horizon oil spill. OSSE evaluation and calibration are then performed by repeating these two OSEs with synthetic observations and comparing the resulting observing-system impact to determine whether it differs from the OSE results. OSSEs are first run to evaluate different airborne sampling strategies with respect to the temporal frequency of flights and the horizontal separation of upper-ocean profiles during each flight. They are then run to assess the impact of releasing multiple floats and gliders. The evaluation strategy focuses on error reduction in fields important for hurricane forecasting, such as the structure of ocean currents and eddies, the upper-ocean heat content distribution, and upper-ocean stratification.
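
    The core OSSE logic above, sampling a nature run with synthetic observations and measuring how much their assimilation reduces analysis error, can be shown in miniature. A toy nudging update stands in for the EnKF, and the fields, indices, and weight are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))     # "nature run" ocean field
background = truth + 0.5 * rng.standard_normal(n)    # erroneous ocean analysis

def rmse(field):
    # Error of an analysis, measured against the nature run
    return float(np.sqrt(np.mean((field - truth) ** 2)))

def assimilate(field, obs_idx, obs, weight=0.8):
    # Toy nudging update at observed points (stand-in for the EnKF step)
    out = field.copy()
    out[obs_idx] += weight * (obs - out[obs_idx])
    return out

# Synthetic "targeted profiles" sample the nature run at flight locations;
# a denser sampling strategy leaves a smaller analysis error
sparse_idx = np.arange(0, n, 10)
dense_idx = np.arange(0, n, 2)
rmse_none = rmse(background)
rmse_sparse = rmse(assimilate(background, sparse_idx, truth[sparse_idx]))
rmse_dense = rmse(assimilate(background, dense_idx, truth[dense_idx]))
```

    Comparing `rmse_sparse` and `rmse_dense` against `rmse_none` is the miniature version of asking how flight frequency and profile spacing pay off in error reduction.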

  7. Incorporation of cooling-induced crystallization into a 2-dimensional axisymmetric conduit heat flow model

    NASA Astrophysics Data System (ADS)

    Heptinstall, David; Bouvet de Maisonneuve, Caroline; Neuberg, Jurgen; Taisne, Benoit; Collinson, Amy

    2016-04-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is first computed with MELTS as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34 MPa (runs 1 & 2) and one of 25 MPa (run 3). The decompression rate was varied between 0.1 MPa/°C (runs 1 & 3) and 0.2 MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69E5 J/kg*K, 9.32E5 J/kg*K, and 9.49E5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30 m-diameter conduit that extends from the surface to a depth of 1500 m (34 MPa). The temporal evolution of temperature is closely tracked at depths of 10 m, 750 m and 1400 m in the centre of the conduit, at the conduit walls, and 20 m from the walls into the host rock. Following initial cooling by 7-15°C at 10 m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-131°C and crystallize to 75 wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400 m depth. We find a wide range in cooling timescales, particularly at depths of 750 m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phase, albite-rich plagioclase feldspar. Run 1 cools fastest and run 3 the slowest, with surface emissivity having the strongest cooling influence in the upper tens of meters of the conduit in all runs.
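
    The coupling described above, conductive cooling that drives crystallization, whose latent heat in turn buffers the cooling, can be sketched with a crude explicit finite-difference loop. This is a 1-D Cartesian stand-in for the paper's 2-D axisymmetric model; the latent-heat closure is a toy proxy for the MELTS-derived tables and every number is illustrative:

```python
import numpy as np

# Explicit finite differences on a 1-D profile across the conduit
# (Cartesian for brevity, unlike the paper's 2-D axisymmetric grid)
nr, dr, dt = 30, 0.5, 2000.0            # cells, m, s
kappa = 1e-6                            # thermal diffusivity, m^2/s
T = np.full(nr, 1000.0)                 # hot magma column, degC
T[-1] = 300.0                           # fixed cold host-rock boundary
L_over_c = 300.0                        # latent heat / specific heat, K
crystallinity = np.zeros(nr)

for _ in range(5000):                   # ~116 days of cooling
    lap = np.zeros(nr)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
    lap[0] = 2.0 * (T[1] - T[0]) / dr**2        # symmetry at the axis
    dT = kappa * dt * lap
    # Cooling drives crystallization, whose latent heat release partially
    # offsets the temperature drop (toy proxy for the MELTS tables)
    dX = np.clip(-dT / 400.0, 0.0, 1.0 - crystallinity)
    crystallinity += dX
    T[:-1] += dT[:-1] + L_over_c * dX[:-1]
    T[-1] = 300.0
```

    Cells near the wall cool and crystallize first while the conduit centre stays near its initial temperature, the same wall-to-centre contrast the abstract tracks at 10 m, 750 m, and 1400 m depth.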

  8. Incorporation of cooling-induced crystallisation into a 2-dimensional axisymmetric conduit heat flow model

    NASA Astrophysics Data System (ADS)

    Heptinstall, D. A.; Neuberg, J. W.; Bouvet de Maisonneuve, C.; Collinson, A.; Taisne, B.; Morgan, D. J.

    2015-12-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is first computed with MELTS as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34 MPa (runs 1 & 2) and one of 25 MPa (run 3). The decompression rate was varied between 0.1 MPa/°C (runs 1 & 3) and 0.2 MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69e5 J/kg*K, 9.32e5 J/kg*K, and 9.49e5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30 m-diameter conduit that extends from the surface to a depth of 1500 m (34 MPa). The temporal evolution of temperature is closely tracked at depths of 10 m, 750 m and 1400 m in the center of the conduit, at the conduit walls, and 20 m from the walls into the host rock. Following initial cooling by 7-15°C at 10 m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-130°C and crystallize to 75 wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400 m depth. We find a wide range in cooling timescales, particularly at depths of 750 m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phases (quartz), where run 1 cools fastest and run 3 slowest. Surface cooling, by comparison, has the strongest influence on the upper tens of meters in all runs.

  9. Impact of MODIS High-Resolution Sea-Surface Temperatures on WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; LaCasse, Katherine M.; Dembek, Scott R.; Santos, Pablo; Lapenta, William M.

    2007-01-01

    Over the past few years, studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) composite sea-surface temperature (SST) products in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. The recent paper by LaCasse et al. (2007, Monthly Weather Review) highlights lower-atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact for NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The scientific hypothesis being tested is that more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running the WRF system in real time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. The EMS is a standalone modeling system capable of downloading the necessary daily datasets and of initializing, running and displaying WRF forecasts in the NWS Advanced Weather Interactive Processing System (AWIPS) with little intervention required by forecasters.
    Twenty-seven-hour forecasts are run daily with start times of 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and the far western portions of the Bahamas, the Florida Keys, the Straits of Florida, and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS, invoking the diabatic "hot-start" capability. In this WRF model "hot start", the LAPS-analyzed cloud and precipitation features are converted into model microphysics fields with enhanced vertical velocity profiles, effectively reducing the model spin-up time required to predict precipitation systems. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12-degree resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA in every respect except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST composites into the SPoRT WRF runs is staggered such that the 0400 UTC composite initializes the 0900 UTC WRF, the 0700 UTC composite initializes the 1500 UTC WRF, the 1600 UTC composite initializes the 2100 UTC WRF, and the 1900 UTC composite initializes the 0300 UTC WRF. A comparison of the SPoRT and Miami forecasts is underway in 2007 and includes quantitative verification of near-surface temperature, dewpoint, and wind forecasts at surface observation locations.
    In addition, particular days of interest are being analyzed to determine the impact of the MODIS SST data on the development and evolution of predicted sea/land-breeze circulations, clouds, and precipitation. This paper will present verification results comparing the NWS MIA forecasts to the SPoRT experimental WRF forecasts and highlight any substantial differences noted in the predicted mesoscale phenomena.
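
    The staggered mapping between MODIS composite times and WRF cycles quoted above can be captured in a small helper. This is hypothetical code, not part of the EMS; the "latest composite at least five hours old" rule is inferred from the four pairings in the abstract:

```python
COMPOSITE_HOURS = (4, 7, 16, 19)  # MODIS SST composite times (UTC)

def composite_for_run(run_hour, latency=5):
    """Pick the latest composite at least `latency` hours before a WRF cycle.

    Falls back to the previous day's latest eligible composite for the
    early-morning cycle, matching the 1900 UTC -> 0300 UTC pairing.
    """
    cutoff = run_hour - latency
    candidates = [h for h in COMPOSITE_HOURS if h <= cutoff]
    if candidates:
        return max(candidates)
    # No composite early enough today: wrap to the previous day
    return max(h for h in COMPOSITE_HOURS if h <= cutoff + 24)
```

    With this rule, the 0900, 1500, 2100, and 0300 UTC cycles map to the 0400, 0700, 1600, and (previous day's) 1900 UTC composites respectively, reproducing the schedule in the abstract.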

  10. Preliminary Results of a U.S. Deep South Warm Season Deep Convective Initiation Modeling Experiment using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    NASA Technical Reports Server (NTRS)

    Medlin, Jeffrey M.; Wood, Lance; Zavodsky, Brad; Case, Jon; Molthan, Andrew

    2012-01-01

    The initiation of deep convection during the warm season is a forecast challenge in the relatively high-instability, low-wind-shear environment of the U.S. Deep South. Despite improved knowledge of the character of well-known mesoscale features such as local sea, bay and land breezes, observations show that the evolution of these features falls well short of fully describing the location of the first initiates. A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and NASA's Short-term Prediction Research and Transition (SPoRT) Center was undertaken during the 2012 warm season to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System. The NASA products were: 4-km Land Information System data, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with 9-km outer grid spacing and 3-km inner nest spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the forecast timing and location of the first initiates, with a focus on the impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  11. EnOI-IAU Initialization Scheme Designed for Decadal Climate Prediction System IAP-DecPreS

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhou, Tianjun; Zheng, Fei

    2018-02-01

    A decadal climate prediction system named IAP-DecPreS was constructed at the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, based on the fully coupled model FGOALS-s2 and a newly developed initialization scheme referred to as EnOI-IAU. In this paper, we introduce the design of the EnOI-IAU scheme, assess the accuracy of initialization integrations using EnOI-IAU, and preliminarily evaluate the hindcast skill of IAP-DecPreS. The EnOI-IAU scheme integrates two conventional assimilation approaches, ensemble optimal interpolation (EnOI) and incremental analysis update (IAU), which are applied to calculate analysis increments and to incorporate them into the model, respectively. Three continuous initialization (INIT) runs were conducted for the period 1950-2015, in which observational sea surface temperature (SST) from HadISST1.1 and subsurface ocean temperature profiles from the EN4.1.1 data set were assimilated. Nine-member, 10-year-long hindcast runs initiated from the INIT runs were then conducted for each year in the period 1960-2005. The accuracy of the INIT runs is evaluated from three aspects: upper-700-m ocean temperature, temporal evolution of SST anomalies, and the dominant interdecadal variability modes, the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Finally, preliminary evaluation of the ensemble mean of the hindcast runs suggests that IAP-DecPreS has skill in predicting PDO-related SST anomalies in the midlatitude North Pacific and AMO-related SST anomalies in the tropical North Atlantic.
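
    The two-stage EnOI-IAU idea, computing an analysis increment from a static ensemble and then feeding it into the model gradually over the assimilation window, can be sketched in a toy linear setting. The ensemble, observation operator, error covariances, and dimensions here are all invented, not the FGOALS-s2 configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens, n_obs, n_iau = 20, 8, 3, 10

# EnOI uses a *static* ensemble of historical model states for the background
# covariance B, unlike the flow-dependent ensemble of an EnKF
ens = rng.standard_normal((n_ens, n_state))
B = np.cov(ens, rowvar=False)

H = np.zeros((n_obs, n_state))           # observe three state elements
H[0, 2] = H[1, 9] = H[2, 15] = 1.0
R = 0.1 * np.eye(n_obs)                  # observation-error covariance

x_b = np.zeros(n_state)                  # background (model) state
y = np.ones(n_obs)                       # synthetic SST/profile observations

# EnOI analysis increment: dx = B H^T (H B H^T + R)^{-1} (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
increment = K @ (y - H @ x_b)

# IAU: instead of inserting dx in one shot, add it in equal parts across the
# assimilation window, interleaved with model time steps
x = x_b.copy()
for _ in range(n_iau):
    # (a real system would advance the coupled model here)
    x += increment / n_iau
```

    Spreading the increment over the window is what suppresses the initialization shocks that a one-shot insertion of `increment` would excite in the coupled model.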

  12. An Operational Configuration of the ARPS Data Analysis System to Initialize WRF in the NWS Environmental Modeling System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan; Blottman, Pete; Hoeth, Brian; Oram, Timothy

    2006-01-01

    The Weather Research and Forecasting (WRF) model is the next-generation community mesoscale model designed to enhance collaboration between the research and operational sectors. The NWS as a whole has begun a transition toward WRF as the mesoscale model of choice to use as a tool in making local forecasts. Currently, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) are running the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) every 15 minutes over the Florida peninsula to produce high-resolution diagnostics supporting their daily operations. In addition, NWS MLB and SMG have used ADAS to provide initial conditions for short-range forecasts from the ARPS numerical weather prediction (NWP) model. Both NWS MLB and SMG have derived great benefit from the maturity of ADAS and would like to use ADAS for providing initial conditions to WRF. To assist in this WRF transition effort, the Applied Meteorology Unit (AMU) was tasked to configure and implement an operational version of WRF that uses output from ADAS for the model initial conditions. Both agencies asked the AMU to develop a framework that allows the ADAS initial conditions to be incorporated into the WRF Environmental Modeling System (EMS) software. Developed by the NWS Science Operations Officer (SOO) Science and Training Resource Center (STRC), the EMS is a complete, full-physics NWP package that incorporates dynamical cores from both the National Center for Atmospheric Research's Advanced Research WRF (ARW) and the National Centers for Environmental Prediction's Non-Hydrostatic Mesoscale Model (NMM) into a single end-to-end forecasting system.
    The EMS performs nearly all pre- and post-processing and can be run automatically to obtain external grid data for WRF boundary conditions, run the model, and convert the data into a format that can be readily viewed within the Advanced Weather Interactive Processing System. The EMS has also incorporated the WRF Standard Initialization (SI) graphical user interface (GUI), which allows the user to set up the domain, dynamical core, resolution, etc., with ease. In addition to the SI GUI, the EMS contains a number of configuration files with extensive documentation to help the user select the appropriate input parameters for model physics schemes, integration timesteps, etc. Therefore, because of its streamlined capability, it is quite advantageous to configure ADAS to provide initial-condition data to the EMS software. One of the biggest potential benefits of configuring ADAS for ingest into the EMS is that the analyses could be used to initialize either the ARW or the NMM. Currently, the ARPS/ADAS software has a conversion routine only for the ARW dynamical core. However, since the NMM runs about 2.5 times faster than the ARW, it would be quite advantageous to be able to run an ADAS/NMM configuration operationally due to the increased efficiency.

  13. A Multi-Season Study of the Effects of MODIS Sea-Surface Temperatures on Operational WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.

    2008-01-01

    Studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower-atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact for NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC, on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS.
    The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12-degree resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPoRT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations were collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPoRT experimental runs and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.

  14. A Modeling and Verification Study of Summer Precipitation Systems Using NASA Surface Initialization Datasets

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.

    2010-01-01

    One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse-type convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within parameterization schemes, model resolution limitations, and uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture, soil temperature, and sea surface temperature (SST) are necessary to better simulate the interactions between the surface and atmosphere, and ultimately improve predictions of summertime pulse convection. This paper describes a sensitivity experiment using the Weather Research and Forecasting (WRF) model. Interpolated land and ocean surface fields from a large-scale model are replaced with high-resolution datasets provided by unique NASA assets in an experimental simulation: the Land Information System (LIS) and Moderate Resolution Imaging Spectroradiometer (MODIS) SSTs. The LIS is run in an offline mode for several years at the same grid resolution as the WRF model to provide compatible land surface initial conditions in an equilibrium state. The MODIS SSTs provide detailed analyses of SSTs over the oceans and large lakes compared to current operational products. 
The WRF model runs initialized with the LIS+MODIS datasets result in a reduction in the overprediction of rainfall areas; however, skill remains nearly as low in both experiments under traditional verification methodologies. Output from object-based verification within NCAR's Meteorological Evaluation Tools reveals that the WRF runs initialized with LIS+MODIS data consistently generated precipitation objects that better matched observed precipitation objects, especially at higher precipitation intensities. The LIS+MODIS runs produced on average a 4% increase in matched precipitation areas and a simultaneous 4% decrease in unmatched areas during three months of daily simulations.
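The matched/unmatched area statistic above can be illustrated with a toy object-based comparison. This is a minimal sketch, not the MODE algorithm from NCAR's Meteorological Evaluation Tools: objects are simply 4-connected cells at or above a threshold, a forecast object counts as matched if it overlaps any observed object, and the grids, threshold, and function names are invented for illustration.

```python
def label_objects(field, thresh):
    """Map each cell >= thresh to an object id (4-connected flood fill)."""
    rows, cols = len(field), len(field[0])
    labels, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if field[r][c] >= thresh and (r, c) not in labels:
                stack, labels[(r, c)] = [(r, c)], next_id
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and field[ni][nj] >= thresh
                                and (ni, nj) not in labels):
                            labels[(ni, nj)] = next_id
                            stack.append((ni, nj))
                next_id += 1
    return labels

def matched_area_fraction(fcst, obs, thresh):
    """Share of forecast rain area in objects that overlap an observed object."""
    f_lab = label_objects(fcst, thresh)
    o_lab = label_objects(obs, thresh)
    if not f_lab:
        return 0.0
    hit_ids = {oid for cell, oid in f_lab.items() if cell in o_lab}
    return sum(1 for oid in f_lab.values() if oid in hit_ids) / len(f_lab)

fcst = [[5, 5, 0, 0, 0, 0],   # two forecast objects: west and east
        [5, 5, 0, 0, 5, 5],
        [0, 0, 0, 0, 5, 5]]
obs  = [[5, 5, 0, 0, 0, 0],   # one observed object: west only
        [5, 5, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
print(matched_area_fraction(fcst, obs, thresh=1))  # 0.5
```

Only the western forecast object overlaps an observed object, so half of the forecast rain area counts as matched, consistent with the matched/unmatched area percentages reported above.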

  15. Possible options to slow down the advancement rate of Tarbela delta.

    PubMed

    Habib-Ur-Rehman; Rehman, Mirza Abdul; Naeem, Usman Ali; Hashmi, Hashim Nisar; Shakir, Abdul Sattar

    2017-12-22

    The pivot point of the delta in Tarbela dam has reached about 10.6 km from the dam face, which may result in blocking of the tunnels. The Tarbela delta was modeled from 1979 to 2060 using the HEC-6 model. Initially, the model was calibrated for the year 1999 and validated for the years 2000, 2001, 2002, and 2006 using data on sediment concentration, reservoir cross sections (73 range lines), elevation-area capacity curves, and inflows and outflows from the reservoir. The model was then used to generate future scenarios, i.e., run-1, run-2, and run-3, with pool levels of 428, 442, and 457 m, respectively, until 2060. Results of run-1 and run-2 showed delta advancement sufficient to choke the tunnels by 2010 and 2030, respectively. Finally, in run-3, the advancement was further delayed, showing that tunnels 1 and 2 will be choked by the year 2050 and the pivot point will reach 6.4 km from the dam face.

  16. Simple estimation of linear 1+1 D tsunami run-up

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Campos, J. A.; Riquelme, S.

    2016-12-01

    An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Because of the simplicity of the linear formulation, complex transformations are unnecessary: the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between linear and non-linear theories but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model. It is also shown that the real case studied is consistent with the field observations.

  17. A Portable Regional Weather and Climate Downscaling System Using GEOS-5, LIS-6, WRF, and the NASA Workflow Tool

    NASA Astrophysics Data System (ADS)

    Kemp, E. M.; Putman, W. M.; Gurganus, J.; Burns, R. W.; Damon, M. R.; McConaughy, G. R.; Seablom, M. S.; Wojcik, G. S.

    2009-12-01

    We present a regional downscaling system (RDS) suitable for high-resolution weather and climate simulations in multiple supercomputing environments. The RDS is built on the NASA Workflow Tool, a software framework for configuring, running, and managing computer models on multiple platforms with a graphical user interface. The Workflow Tool is used to run the NASA Goddard Earth Observing System Model Version 5 (GEOS-5), a global atmospheric-ocean model for weather and climate simulations down to 1/4 degree resolution; the NASA Land Information System Version 6 (LIS-6), a land surface modeling system that can simulate soil temperature and moisture profiles; and the Weather Research and Forecasting (WRF) community model, a limited-area atmospheric model for weather and climate simulations down to 1-km resolution. The Workflow Tool allows users to customize model settings to user needs; saves and organizes simulation experiments; distributes model runs across different computer clusters (e.g., the DISCOVER cluster at Goddard Space Flight Center, the Cray CX-1 Desktop Supercomputer, etc.); and handles all file transfers and network communications (e.g., scp connections). Together, the RDS is intended to aid researchers by making simulations as easy as possible to generate on the computer resources available. Initial conditions for LIS-6 and GEOS-5 are provided by Modern Era Retrospective-Analysis for Research and Applications (MERRA) reanalysis data stored on DISCOVER. The LIS-6 is first run for 2-4 years forced by MERRA atmospheric analyses, generating initial conditions for the WRF soil physics. GEOS-5 is then initialized from MERRA data and run for the period of interest. Large-scale atmospheric data, sea-surface temperatures, and sea ice coverage from GEOS-5 are used as boundary conditions for WRF, which is run for the same period of interest. 
Multiply-nested grids are used for both LIS-6 and WRF, with the innermost grid run at a resolution sufficient for typical local weather features (terrain, convection, etc.). All model runs, restarts, and file transfers are coordinated by the Workflow Tool. Two use cases are being pursued. First, the RDS generates regional climate simulations down to 4-km grid spacing for the Chesapeake Bay region, with WRF output provided as input to more specialized models (e.g., ocean/lake, hydrological, marine biology, and air pollution). This will allow assessment of climate impacts on local interests (e.g., changes in Bay water levels and temperatures, inundation, fish kills, etc.). Second, the RDS generates high-resolution hurricane simulations in the tropical North Atlantic. This use case will support Observing System Simulation Experiments (OSSEs) of dynamically-targeted lidar observations as part of the NASA Sensor Web Simulator project. Sample results will be presented at the AGU Fall Meeting.

  18. Does finance affect environmental degradation: evidence from One Belt and One Road Initiative region?

    PubMed

    Hafeez, Muhammad; Chunhui, Yuan; Strohmaier, David; Ahmed, Manzoor; Jie, Liu

    2018-04-01

    This paper explores the effects of finance on environmental degradation and investigates the environmental Kuznets curve (EKC) for each of the 52 countries that participate in the One Belt and One Road Initiative (OBORI), using a long panel data span (1980-2016). We utilized long-run panel econometric models (fully modified ordinary least squares and dynamic ordinary least squares) to obtain long-run estimates for the full panel and at the country level. Moreover, the Dumitrescu and Hurlin (2012) causality test is applied to examine the short-run causalities among the considered variables. The empirical findings validate the EKC hypothesis; the long-run estimates indicate that finance significantly enhances environmental degradation (negatively in a few cases). The short-run heterogeneous causality analysis confirms bi-directional causality between finance and environmental degradation. These outcomes suggest that policymakers should consider the environmental degradation caused by financial development in the One Belt and One Road region.

  19. Preliminary Results of a U.S. Deep South Modeling Experiment Using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    NASA Technical Reports Server (NTRS)

    Wood, Lance; Medlin, Jeffrey M.; Case, Jon

    2012-01-01

    A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and the NASA Short-term Prediction Research and Transition (SPoRT) Center began during the 2011-2012 cold season and continued into the 2012 warm season. The focus was on two frequent U.S. Deep South forecast challenges: the initiation of deep convection during the warm season, and heavy precipitation during the cold season. We wanted to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System in improving the model representation of mesoscale boundaries such as the local sea, bay, and land breezes (which often lead to warm season convective initiation), and improving the model representation of slow-moving or quasi-stationary frontal boundaries (which focus cold season storm cell training and heavy precipitation). The NASA products were: the 4-km Land Information System, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with an outer grid at 9-km spacing and an inner nest at 3-km grid spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the positive and negative impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  20. Initial test results using the GEOS-3 engineering model altimeter

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Clary, J. B.

    1977-01-01

    Data from a series of experimental tests run on the engineering model of the GEOS-3 radar altimeter using the Test and Measurement System (TAMS) designed for preflight testing of the radar altimeter are presented. These tests were conducted as a means of preparing and checking out a detailed test procedure to be used in running similar tests on the GEOS-3 protoflight model altimeter systems. The test procedures and results are also included.

  1. RHIC Au beam in Run 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S. Y.

    The Au beam at the RHIC ramp in run 2014 is reviewed together with runs 2011 and 2012. Observed bunch length and longitudinal emittance are compared with IBS simulations. The IBS growth rate of the longitudinal emittance in run 2014 is similar to run 2011, and both are larger than in run 2012. This is explained by the large transverse emittance at high intensity observed in run 2012, but not in run 2014. The big improvement in AGS ramping in run 2014 might be related to this change. The importance of the injector intensity improvement in run 2014 is emphasized, which gives rise to the initial luminosity improvement of 50% in run 2014 compared with the previous Au-Au run 2011. In addition, a modified IBS model, calibrated using the RHIC Au runs from 9.8 GeV/n to 100 GeV/n, is presented and used in the study.

  2. Overuse Injury Assessment Model

    DTIC Science & Technology

    2003-06-01

    initial physical fitness level foot type lower extremity alignment altered gait pretest anthropometry diet and nutrition genetics endocrine status and...using published anthropometry values. Assuming that these forces are the primary loads that cause the tibia to undergo shear and bending, the maximal...both the model and in vivo results suggest that the ratio of walking to running bone stress is 0.54. Table 3-3 Estimated walk/march and run tensile

  3. High-Resolution Specification of the Land and Ocean Surface for Improving Regional Mesoscale Model Predictions

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Lazarus, Steven M.; Splitt, Michael E.; Crosson, William L.; Lapenta, William M.; Jedlovec, Gary J.; Peters-Lidard, Christa D.

    2008-01-01

    The exchange of energy and moisture between the Earth's surface and the atmospheric boundary layer plays a critical role in many meteorological processes. High-resolution, accurate representations of surface properties such as sea-surface temperature (SST), soil temperature and moisture content, ground fluxes, and vegetation are necessary to better understand the Earth-atmosphere interactions and improve numerical predictions of sensible weather. The NASA Short-term Prediction Research and Transition (SPoRT) Center has been conducting separate studies to examine the impacts of high-resolution land-surface initialization data from the Goddard Space Flight Center Land Information System (LIS) on subsequent WRF forecasts, as well as the influence of initializing WRF with SST composites derived from the MODIS instrument. The current project addresses the combined impacts of using high-resolution lower-boundary data over both land (LIS data) and water (MODIS SSTs) on subsequent daily WRF forecasts over Florida during May 2004. For this experiment, the WRF model is configured to run on a nested domain with 9-km and 3-km grid spacing, centered on the Florida peninsula and adjacent coastal waters of the Gulf of Mexico and Atlantic Ocean. A control configuration of WRF is established to take all initial condition data from the NCEP Eta model. Meanwhile, two WRF experimental runs are configured to use high-resolution initialization data from (1) LIS land-surface data only, and (2) a combination of LIS data and high-resolution MODIS SST composites. The experiment involves running 24-hour simulations of the control WRF configuration, the LIS-initialized WRF, and the LIS+MODIS-initialized WRF daily for the entire month of May 2004. All atmospheric data for initial and boundary conditions for the Control, LIS, and LIS+MODIS runs come from the NCEP Eta model on a 40-km grid. 
Verification statistics are generated at land surface observation sites and buoys, and the impacts of the high-resolution lower-boundary data on the development and evolution of mesoscale circulations such as sea and land breezes are examined. This paper will present the results of these WRF modeling experiments using LIS and MODIS lower-boundary datasets over the Florida peninsula during May 2004.

  4. AMPK agonist AICAR delays the initial decline in lifetime-apex V̇o2 peak, while voluntary wheel running fails to delay its initial decline in female rats.

    PubMed

    Toedebusch, Ryan G; Ruegsegger, Gregory N; Braselton, Joshua F; Heese, Alexander J; Hofheins, John C; Childs, Tom E; Thyfault, John P; Booth, Frank W

    2016-02-01

    There has never been an outcome measure for human health more important than peak oxygen consumption (V̇o2 peak), yet little is known regarding the molecular triggers for its lifetime decline with aging. We examined the ability of physical activity or 5 wk of 5-aminoimidazole-4-carboxamide-1-β-d-ribofuranoside (AICAR) administration to delay the initial aging-induced decline in lifetime-apex V̇o2 peak, and potential underlying molecular mechanisms. Experiment 1 consisted of female rats with (RUN) and without (NO RUN) running wheels, while experiment 2 consisted of female nonrunning rats receiving the AMPK agonist AICAR (0.5 mg/g/day) subcutaneously for 5 wk beginning at 17 wk of age. All rats underwent frequent, weekly or biweekly, V̇o2 peak tests beginning at 10 wk of age. In experiment 1, lifetime-apex V̇o2 peak occurred at 19 wk of age in both RUN and NO RUN and decreased thereafter. V̇o2 peak measured across experiment 1 was ∼25% higher in RUN than in NO RUN. In experiment 2, AICAR delayed the chronological age of lifetime-apex V̇o2 peak observed in experiment 1 by 1 wk, from 19 to 20 wk of age. RUN and NO RUN showed different skeletal muscle transcriptomic profiles both pre- and postapex. Additionally, growth and development pathways are differentially regulated between RUN and NO RUN. Angiomotin mRNA was downregulated postapex in RUN and NO RUN. Furthermore, strong significant correlations to V̇o2 peak and trends for decreased protein concentration support angiomotin's potential importance in our model. Contrary to our primary hypothesis, wheel running was not sufficient to delay the chronological age of lifetime-apex V̇o2 peak decline, whereas AICAR delayed it by 1 wk. Copyright © 2016 the American Physiological Society.

  5. Can we trust climate models to realistically represent severe European windstorms?

    NASA Astrophysics Data System (ADS)

    Trzeciak, Tomasz M.; Knippertz, Peter; Owen, Jennifer S. R.

    2014-05-01

    Despite the enormous advances made in climate change research, robust projections of the position and strength of the North Atlantic storm track are not yet possible. In particular with respect to damaging windstorms, this uncertainty poses enormous risks to European societies and the (re)insurance industry. Previous studies have addressed the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data and found large disagreement between different climate models, between different ensemble members of the same model, and with observed climatologies of intense cyclones. One weakness of such statistical evaluations lies in the difficulty of separating influences of the climate model's basic state from the influence of fast processes on the development of the most intense storms. Compensating effects between the two might conceal errors and suggest higher reliability than there really is. A possible way to separate influences of fast and slow processes in climate projections is through a "seamless" approach of hindcasting historical, severe storms with climate models started from predefined initial conditions and run in a numerical weather prediction mode on the time scale of several days. Such a cost-effective case-study approach, which draws from and expands on the concepts of the Transpose-AMIP initiative, has recently been undertaken in the SEAMSEW project at the University of Leeds, funded by the AXA Research Fund. Key results from this work, focusing on 20 historical storms and using different lead times and horizontal and vertical resolutions, include: (a) Tracks are represented reasonably well by most hindcasts. (b) Sensitivity to vertical resolution is low. 
(c) There is a systematic underprediction of cyclone depth at a coarse resolution of T63, but surprisingly no systematic bias is found for higher-resolution runs using T127, showing that climate models are in fact able to represent the storm dynamics well if given the correct initial conditions. Combined with the too-low number of deep cyclones in many climate models, this points to an insufficient number of storm-prone initial conditions in free-running climate simulations. This question will be addressed in future work.

  6. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models

    PubMed Central

    Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.

    2007-01-01

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
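The reported association of a single parameter (the cloud entrainment coefficient) with roughly 30% of the variation in climate sensitivity can be quantified with a between-group variance fraction (eta-squared). This is a generic ANOVA-style sketch, not the study's actual analysis; the ensemble values and the `eta_squared` helper below are invented for illustration.

```python
# Fraction of ensemble variance in a response (e.g. climate sensitivity)
# associated with one varied parameter: between-group sum of squares
# divided by the total sum of squares (eta-squared).

def eta_squared(groups):
    """groups: list of lists of response values, one list per parameter setting."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    return ss_between / ss_total

# Three settings of one parameter, each with its own spread of sensitivities
# (invented numbers):
low  = [2.1, 2.3, 2.2, 2.4]
mid  = [3.0, 3.2, 2.9, 3.1]
high = [4.0, 4.3, 4.1, 3.9]
print(round(eta_squared([low, mid, high]), 2))  # 0.97: this parameter dominates
```

In a real ensemble analysis the groups would be runs binned by parameter value, and interactions between parameters (which the paper finds to be important) would require a fuller multi-way decomposition.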

  7. Assimilation of Cloud Information in Numerical Weather Prediction Model in Southwest China

    NASA Astrophysics Data System (ADS)

    HENG, Z.

    2016-12-01

    Based on the ARPS Data Analysis System (ADAS) and the Weather Research and Forecasting (WRF) model, simulation experiments from July 1st 2015 to August 1st 2015 are conducted over Southwest China. In the assimilation experiment (EXP), surface observations are assimilated, and cloud information retrieved from weather Doppler radar and the Fengyun-2E (FY-2E) geostationary satellite is used through the complex cloud analysis scheme in ADAS to insert microphysical variables and adjust the humidity structure in the initial condition. In the control run (CTL), surface observations are assimilated, but no cloud information is used in ADAS. The simulation of a rainstorm caused by the Southwest Vortex during 14-15 July 2015 shows that the EXP run better captures the shape and intensity of precipitation, especially the center of the rainstorm. The one-month intercomparison of the initial and prediction results between the EXP and CTL runs reveals that the EXP runs represent rainfall more realistically and achieve higher scores in precipitation prediction. Keywords: NWP, rainstorm, data assimilation

  8. ISMIP6 - initMIP: Greenland ice sheet model initialisation experiments

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Payne, Tony; Larour, Eric; Abe Ouchi, Ayako; Gregory, Jonathan; Lipscomb, William; Seroussi, Helene; Shepherd, Andrew; Edwards, Tamsin

    2016-04-01

    Earlier large-scale Greenland ice sheet sea-level projections, e.g., those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. This intercomparison exercise (initMIP) aims at comparing, evaluating, and improving the initialisation techniques used in the ice sheet modelling community and at estimating the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experiments are conceived for the large-scale Greenland ice sheet and are designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The latter experiments serve to evaluate the initialisation in terms of model drift (a forward run without any forcing) and the response to a large perturbation (a prescribed surface mass balance anomaly). We present and discuss first results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  9. "Hit-and-Run" leaves its mark: catalyst transcription factors and chromatin modification.

    PubMed

    Varala, Kranthi; Li, Ying; Marshall-Colón, Amy; Para, Alessia; Coruzzi, Gloria M

    2015-08-01

    Understanding how transcription factor (TF) binding is related to gene regulation is a moving target. We recently uncovered genome-wide evidence for a "Hit-and-Run" model of transcription. In this model, a master TF "hits" a target promoter to initiate a rapid response to a signal. As the "hit" is transient, the model invokes recruitment of partner TFs to sustain transcription over time. Following the "run", the master TF "hits" other targets to propagate the response genome-wide. As such, a TF may act as a "catalyst" to mount a broad and acute response in cells that first sense the signal, while the recruited TF partners promote long-term adaptive behavior in the whole organism. This "Hit-and-Run" model likely has broad relevance, as TF perturbation studies across eukaryotes show small overlaps between TF-regulated and TF-bound genes, implicating transient TF-target binding. Here, we explore this "Hit-and-Run" model to suggest molecular mechanisms and its biological relevance. © 2015 The Authors. Bioessays published by WILEY Periodicals, Inc.

  10. Reciprocal Sliding Friction Model for an Electro-Deposited Coating and Its Parameter Estimation Using Markov Chain Monte Carlo Method

    PubMed Central

    Kim, Kyungmok; Lee, Jaewook

    2016-01-01

    This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and a transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov chain Monte Carlo (MCMC) approach. The friction coefficients estimated with the MCMC approach are found to be in good agreement with measured ones. PMID:28773359
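The MCMC parameter estimation step can be sketched for the linear (running-in and steady-state) portion of such a friction model. This is a minimal random-walk Metropolis sampler under invented assumptions: synthetic data with mu = a + b * cycle plus Gaussian noise, a fixed known noise level, flat priors, and illustrative proposal widths; the Kachanov-type transition stage and the paper's actual priors and data are not reproduced.

```python
import math, random

random.seed(0)

# Synthetic "measurements": true a = 0.10, b = 0.0005, noise sd = 0.005
cycles = list(range(0, 1000, 50))
data = [0.10 + 0.0005 * n + random.gauss(0, 0.005) for n in cycles]

def log_likelihood(a, b, sigma=0.005):
    """Gaussian log-likelihood of mu = a + b * cycle (constants dropped)."""
    return sum(-0.5 * ((mu - (a + b * n)) / sigma) ** 2
               for n, mu in zip(cycles, data))

def metropolis(n_steps=20000):
    a, b = 0.0, 0.0                      # deliberately poor starting point
    ll = log_likelihood(a, b)
    samples = []
    for _ in range(n_steps):
        a_new = a + random.gauss(0, 0.001)        # random-walk proposals
        b_new = b + random.gauss(0, 0.000005)
        ll_new = log_likelihood(a_new, b_new)
        if ll_new >= ll or random.random() < math.exp(ll_new - ll):
            a, b, ll = a_new, b_new, ll_new       # accept the move
        samples.append((a, b))
    return samples[n_steps // 2:]                 # discard burn-in

post = metropolis()
a_hat = sum(s[0] for s in post) / len(post)
b_hat = sum(s[1] for s in post) / len(post)
print(round(a_hat, 3), round(b_hat, 5))  # posterior means near 0.1 and 0.0005
```

A production fit would also sample the noise level, use multiple chains with convergence diagnostics, and tune the proposal widths, but the accept/reject loop above is the core of the MCMC approach the abstract refers to.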

  11. Modulation of Soil Initial State on WRF Model Performance Over China

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun

    2017-11-01

    The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized from reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical simulations, a control run with the CFSR soil state and a disturbed run with the corrected soil state, were conducted using the Weather Research and Forecasting model. All simulations are initiated four times per day and run for 48 h. Model results show that the corrected soil state, for example, a warmer and drier surface over most parts of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The resulting changes in the lifting condensation level, level of free convection, and water transport favor precipitation over wet regions while suppressing it over dry regions. Moreover, diagnoses indicate that remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.
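The station-based bias correction step can be illustrated schematically: compute observation-minus-reanalysis differences at stations, interpolate them to grid points, and add them to the background field. Inverse-distance weighting is used below as a deliberately simple stand-in for the global optimal dual kriging of the study, and all coordinates and values are invented for illustration.

```python
def idw_correction(grid_pts, stations, power=2):
    """Interpolate station (obs - background) differences to grid points.

    stations: list of (x, y, obs_minus_background) tuples.
    Returns one additive correction per grid point.
    """
    out = []
    for gx, gy in grid_pts:
        num = den = 0.0
        for sx, sy, diff in stations:
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0:                 # grid point coincides with a station
                num, den = diff, 1.0
                break
            w = 1.0 / d2 ** (power / 2)   # inverse-distance weight
            num += w * diff
            den += w
        out.append(num / den)
    return out

# Two stations where observations run warmer than the reanalysis background:
stations = [(0.0, 0.0, 1.0), (10.0, 0.0, 2.0)]   # (x, y, difference in K)
grid = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
print(idw_correction(grid, stations))  # approx. [1.0, 1.5, 2.0]
```

Unlike kriging, this carries no spatially varying error model, which is why the study uses the optimal dual kriging formulation rather than a plain distance weighting.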

  12. MM5 Modeling of the Madden-Julian Oscillation in the Indian and West Pacific Oceans: Implications of 30-70-Day Boundary Effects on MJO Development.

    NASA Astrophysics Data System (ADS)

    Gustafson, William I., Jr.; Weare, Bryan C.

    2004-03-01

    The results of an experiment designed to isolate the initiation phase of the Madden-Julian oscillation (MJO) from 30-70-day boundary effects are presented. The technique employs the fifth-generation Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Mesoscale Model (MM5), as first presented in the companion paper. Two runs, each 2 yr long, are integrated forward from 1 June 1990. The first run, called the control, uses the unmodified National Centers for Environmental Prediction (NCEP)-NCAR reanalysis (NRA) dataset for boundary conditions. The second run, called the notched, uses the same NRA dataset for the boundary conditions, with the exception that all signals with periodicities in the 30-70-day range have been removed. Any signals in the 30-70-day range subsequently generated by the notched run are then solely due to signals generated within the model domain or to signals entering through the domain boundaries at frequencies outside of the MJO band. Comparisons between 2-yr means from each run indicate that filtering the boundaries does not significantly modify the model climatology. The mean wind structure, thermodynamic state, and outgoing longwave radiation (OLR) are almost identical in the control and notched runs. A 30-70-day bandpass filter is used to isolate MJO-like signals in the runs. Comparisons of 30-70-day bandpassed zonal wind, moist static energy (MSE), and OLR reveal that the notched run develops many of the expected characteristics of MJO episodes, but with a weaker signal. Large-scale, organized structures develop that possess seasonal shifts in amplitude mirroring observed MJO activity, have opposite wind directions in the upper and lower troposphere, and propagate eastward during most strong episodes. 
The results suggest that neither remnants from previous MJO episodes nor extratropical feedbacks within the MJO time band are necessary for MJO initiation. However, the control run is more organized than the notched run, implying that 30-70-day signals outside the model domain influence the MJO signal. There is also some evidence that the recharge-discharge mechanism plays a role in MJO formation.
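The "notching" of the boundary conditions, removing all periodicities in the 30-70-day band, can be sketched with a spectral filter. This is a toy illustration, not the paper's filtering procedure: a naive O(n^2) pure-Python DFT, an invented two-sine daily signal, and all names are assumptions.

```python
import cmath, math

def notch_band(x, dt_days, p_lo=30.0, p_hi=70.0):
    """Zero all DFT components whose period (in days) lies in [p_lo, p_hi]."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]                      # forward DFT
    for k in range(n):
        f = min(k, n - k) / (n * dt_days)        # cycles/day, two-sided spectrum
        if f > 0 and p_lo <= 1.0 / f <= p_hi:
            X[k] = 0.0                           # remove the 30-70-day band
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]                   # inverse DFT

n = 120  # days of daily "boundary condition" data
sig = [math.sin(2 * math.pi * t / 20) + math.sin(2 * math.pi * t / 40)
       for t in range(n)]
notched = notch_band(sig, dt_days=1.0)
# The 40-day component (inside the band) is removed; the 20-day one survives.
```

Zeroing both k and n-k preserves the conjugate symmetry of the spectrum, so the filtered series stays real, which is the essential property of applying such a notch to physical boundary fields.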

  13. The US CLIVAR Working Group on Drought: A Multi-Model Assessment of the Impact of SST Anomalies on Regional Drought

    NASA Astrophysics Data System (ADS)

    Schubert, S.; Drought Working Group

    2008-12-01

    The US CLIVAR Working Group on Drought recently initiated a series of global climate model simulations forced with idealized SST anomaly patterns, designed to address a number of uncertainties regarding the impact of SST forcing and the role of land-atmosphere feedbacks on regional drought. Specific questions that the runs are designed to address include: What are the mechanisms that maintain drought across the seasonal cycle and from one year to the next? What is the role of the land? What is the role of the different ocean basins, including the impact of El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the Atlantic Multi-decadal Oscillation (AMO), and warming trends in the global oceans? The runs were done with several global atmospheric models including NASA/NSIPP-1, NCEP/GFS, GFDL/AM2, and NCAR CCM3 and CAM3. In addition, runs were done with the NCEP CFS (coupled atmosphere-ocean) model by employing a novel adjustment technique to nudge the coupled model towards the imposed SST forcing patterns. This talk provides an overview of the experiments and some initial results.
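The abstract describes the adjustment only as nudging the coupled model toward the imposed SST patterns. A generic Newtonian-relaxation step (an assumption about the usual form of nudging, not the actual CFS implementation) looks like:

```python
def nudge(value, target, dt, tau):
    """One Newtonian-relaxation (nudging) step: pull `value` toward
    `target` with relaxation timescale `tau`, using time step `dt`."""
    return value + (dt / tau) * (target - value)
```

Repeated application converges geometrically toward the target field, which is how an imposed SST pattern can be maintained against the coupled model's own tendencies.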

  14. The US CLIVAR Working Group on Drought: A Multi-Model Assessment of the Impact of SST Anomalies on Regional Drought

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried

    2008-01-01

    The US CLIVAR Working Group on Drought recently initiated a series of global climate model simulations forced with idealized SST anomaly patterns, designed to address a number of uncertainties regarding the impact of SST forcing and the role of land-atmosphere feedbacks on regional drought. Specific questions that the runs are designed to address include: What are the mechanisms that maintain drought across the seasonal cycle and from one year to the next? What is the role of the land? What is the role of the different ocean basins, including the impact of El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the Atlantic Multi-decadal Oscillation (AMO), and warming trends in the global oceans? The runs were done with several global atmospheric models including NASA/NSIPP-1, NCEP/GFS, GFDL/AM2, and NCAR CCM3 and CAM3. In addition, runs were done with the NCEP CFS (coupled atmosphere-ocean) model by employing a novel adjustment technique to nudge the coupled model towards the imposed SST forcing patterns. This talk provides an overview of the experiments and some initial results.

  15. Toward Improved Land Surface Initialization in Support of Regional WRF Forecasts at the Kenya Meteorological Service (KMS)

    NASA Technical Reports Server (NTRS)

    Case, Johnathan L.; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.

    2014-01-01

    Flooding and drought are two key forecasting challenges for the Kenya Meteorological Service (KMS). Atmospheric processes leading to excessive precipitation and/or prolonged drought can be quite sensitive to the state of the land surface, which interacts with the planetary boundary layer (PBL) of the atmosphere, providing a source of heat and moisture. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface, particularly within weakly-sheared environments such as in the tropics and sub-tropics. These heat and moisture fluxes during the day can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in land surface and numerical weather prediction (NWP) models. Enhanced regional modeling capabilities have the potential to improve forecast guidance in support of daily operations and high-impact weather over eastern Africa. KMS currently runs a configuration of the Weather Research and Forecasting (WRF) NWP model in real time to support its daily forecasting operations, making use of the NOAA/National Weather Service (NWS) Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the KMS-WRF runs on a regional grid over eastern Africa. Two organizations at the NASA Marshall Space Flight Center in Huntsville, AL, SERVIR and the Short-term Prediction Research and Transition (SPoRT) Center, have established a working partnership with KMS for enhancing its regional modeling capabilities through new datasets and tools. To accomplish this goal, SPoRT and SERVIR are providing enhanced, experimental land surface initialization datasets and model verification capabilities to KMS as part of this collaboration. 
To produce a land-surface initialization more consistent with the resolution of the KMS-WRF runs, the NASA Land Information System (LIS) is run at a comparable resolution to provide real-time, daily soil initialization data in place of data interpolated from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model soil moisture and temperature fields. Additionally, real-time green vegetation fraction (GVF) data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite will be incorporated into the KMS-WRF runs, once it becomes publicly available from the National Environmental Satellite Data and Information Service (NESDIS). Finally, model verification capabilities will be transitioned to KMS using the Model Evaluation Tools (MET; Brown et al. 2009) package in conjunction with a dynamic scripting package developed by SPoRT (Zavodsky et al. 2014), to help quantify possible improvements in simulated temperature, moisture, and precipitation resulting from the experimental land surface initialization. Furthermore, the transition of these MET tools will enable KMS to monitor model forecast accuracy in near real time. This paper presents preliminary efforts to improve land surface model initialization over eastern Africa in support of operations at KMS. The remainder of this extended abstract is organized as follows: The collaborating organizations involved in the project are described in Section 2; background information on LIS and the configuration for eastern Africa is presented in Section 3; the WRF configuration used in this modeling experiment is described in Section 4; sample experimental WRF output with and without LIS initialization data are given in Section 5; a summary is given in Section 6 followed by acknowledgements and references.

  16. Recent Upgrades to NASA SPoRT Initialization Datasets for the Environmental Modeling System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Lafontaine, Frank J.; Molthan, Andrew L.; Zavodsky, Bradley T.; Rozumalski, Robert A.

    2012-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its NOAA/National Weather Service (NWS) partners that can initialize specific fields for local model runs within the NOAA/NWS Science and Training Resource Center Environmental Modeling System (EMS). The suite of SPoRT products for use in the EMS consists of a Sea Surface Temperature (SST) composite that includes a Lake Surface Temperature (LST) analysis over the Great Lakes, a Great Lakes sea-ice extent within the SST composite, a real-time Green Vegetation Fraction (GVF) composite, and NASA Land Information System (LIS) gridded output. This paper and companion poster describe each dataset and provide recent upgrades made to the SST, Great Lakes LST, GVF composites, and the real-time LIS runs.

  17. Weather model performance on extreme rainfall event simulations over the Western Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the mainland Portugal rainy season. The periods of heavy to extremely heavy rainfall were due to several low-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the 1971-2000 climatological mean by 89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the performance of the other two runs was also good, so the selected extreme rainfall episode was successfully reproduced.
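The abstract does not list the statistical parameters used for station verification; two of the most common, mean bias and root-mean-square error over paired hourly values, can be sketched as follows (a hypothetical helper for illustration only):

```python
import math

def bias_rmse(forecast, observed):
    """Mean bias and RMSE of paired forecast/observed hourly values.
    A positive bias indicates overestimated precipitation."""
    diffs = [f - o for f, o in zip(forecast, observed)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse
```

A systematic wet bias, as reported for WRF-ARW here, would show up as a positive mean bias across the 15 stations.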

  18. Improving Air Quality Forecasts with AURA Observations

    NASA Technical Reports Server (NTRS)

    Newchurch, M. J.; Biazer, A.; Khan, M.; Koshak, W. J.; Nair, U.; Fuller, K.; Wang, L.; Parker, Y.; Williams, R.; Liu, X.

    2008-01-01

    Past studies have identified model initial and boundary conditions as sources of reducible errors in air-quality simulations. In particular, improving the initial condition improves the accuracy of short-term forecasts, as it allows the impact of local emissions to be realized by the model, and improving boundary conditions improves long-range transport through the model domain, especially in recirculating anticyclones. For the August 2006 period, we use AURA/OMI ozone measurements along with MODIS and CALIPSO aerosol observations to improve the initial and boundary conditions of ozone and particulate matter (PM). Assessment of the model, by comparing the control run and the satellite-assimilation run to the IONS06 network of ozonesonde observations (the densest ozone sounding campaign ever conducted in North America), to AURA/TES ozone profile measurements, and to the EPA ground network of ozone and PM measurements, will show significant improvement in the CMAQ calculations that use AURA initial and boundary conditions. Further analyses of lightning occurrences from ground and satellite observations and AURA/OMI NO2 column abundances will identify the lightning NOx signal evident in OMI measurements and suggest pathways for incorporating the lightning and NO2 data into the CMAQ simulations.

  19. Asia Pacific Research Initiative for Sustainable Energy Systems 2011 (APRISES11)

    DTIC Science & Technology

    2017-09-29

    created during a single run, highlighting rapid prototyping capabilities. NRL’s overall goal was to evaluate whether 3D printed metallic bipolar plates...varying the air flow to evaluate the effect on peak power. These runs are displayed in Figure 2.1.17. The reactants were connected in co-flow with the...way valve allows the operator to either run the gas through a humidifier (PermaPure Model FCl 25-240-7) or a bypass loop. On the humidifier side of

  20. Dynamically linking economic models to ecological condition for coastal zone management: Application to sustainable tourism planning.

    PubMed

    Dvarskas, Anthony

    2017-03-01

    While the development of the tourism industry can bring economic benefits to an area, it is important to consider the long-run impact of the industry on a given location. Particularly when the tourism industry relies upon a certain ecological state, those weighing different development options need to consider the long-run impacts of increased tourist numbers upon measures of ecological condition. This paper presents one approach for linking a model of recreational visitor behavior with an ecological model that estimates the impact of the increased visitors upon the environment. Two simulations were run for the model using initial parameters available from survey data and water quality data for beach locations in Croatia. Results suggest that the resilience of a given tourist location to the changes brought by increasing tourism numbers is important in determining its long-run sustainability. Further work should investigate additional model components, including the tourism industry, refinement of the relationships assumed by the model, and application of the proposed model in additional areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
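The dynamic linkage between visitor numbers and ecological condition can be illustrated with a toy recursion (all parameters and functional forms here are invented for illustration and are not the paper's model): visitors grow with ecological quality, while quality degrades with visitor load and recovers at a resilience rate.

```python
def simulate(steps, visitors=100.0, quality=1.0,
             growth=0.05, impact=5e-4, resilience=0.1):
    """Toy coupled tourism/ecology recursion. `quality` lies in [0, 1],
    with 1.0 representing pristine condition."""
    path = []
    for _ in range(steps):
        visitors *= 1.0 + growth * quality               # growth scaled by quality
        quality += resilience * (1.0 - quality) - impact * visitors
        quality = min(1.0, max(0.0, quality))            # clamp to valid range
        path.append((visitors, quality))
    return path
```

With a low resilience parameter, quality collapses as visitor numbers climb, which echoes the paper's conclusion that a site's resilience governs its long-run sustainability.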

  1. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    PubMed

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach for better understanding the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system which was based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016-March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features associated with the prototype model. Consensus in opinion about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. Two Delphi rounds were needed to validate the prototype model. Out of a total of 51 experts who were initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be running experts (66.7%), and approximately a third indicated their expertise as systems thinkers (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. 
The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system-wide opportunities for the implementation of sustainable RRI prevention interventions. This 'big picture' perspective represents the first step required when thinking about the range of contributory causal factors that affect other system elements, as well as runners' behaviours in relation to RRI risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
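The ≥75% consensus rule used in the Delphi rounds can be expressed directly (a sketch of the stated rule; the study's actual tabulation may differ):

```python
from collections import Counter

def consensus_reached(votes, threshold=0.75):
    """True when the most common response (e.g. 'agree' or 'disagree')
    accounts for at least `threshold` of all respondents."""
    counts = Counter(votes)
    return max(counts.values()) / len(votes) >= threshold
```

For example, 22 of 24 experts agreeing (91.7%) clears the threshold, while a 15-9 split (62.5%) would require another round.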

  2. WRF Simulation over the Eastern Africa by use of Land Surface Initialization

    NASA Astrophysics Data System (ADS)

    Sakwa, V. N.; Case, J.; Limaye, A. S.; Zavodsky, B.; Kabuchanga, E. S.; Mungai, J.

    2014-12-01

    The East Africa region experiences severe weather events and associated hazards of varying magnitude. Heavy precipitation leads to widespread flooding, while insufficient rainfall in some areas results in drought. Flooding and drought are two key forecasting challenges for the Kenya Meteorological Service (KMS). The land surface, which interacts with the atmospheric boundary layer, supplies the heat and moisture that can produce excessive precipitation or, in its absence, severe drought. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface within weakly-sheared environments, such as in the tropics and sub-tropics. These heat and moisture fluxes during the day can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in numerical weather prediction models. Improved modeling capabilities within the region have the potential to enhance forecast guidance in support of daily operations and high-impact weather over East Africa. KMS currently runs a configuration of the Weather Research and Forecasting (WRF) model in real time to support its daily forecasting operations, invoking the Non-hydrostatic Mesoscale Model (NMM) dynamical core. It makes use of the National Oceanic and Atmospheric Administration/National Weather Service Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the WRF-NMM model runs on a 7-km regional grid over Eastern Africa. SPoRT and SERVIR provide land surface initialization datasets and model verification tools. The NASA Land Information System (LIS) provides real-time, daily soil initialization data in place of interpolated Global Forecast System soil moisture and temperature data. 
Model verification is done using the Model Evaluation Tools (MET) package, in order to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. These MET tools enable KMS to monitor model forecast accuracy in near real time. This study highlights verification results of WRF runs over East Africa using the LIS land surface initialization.

  3. Results of the Greenland Ice Sheet Model Initialisation Experiments ISMIP6 - initMIP-Greenland

    NASA Astrophysics Data System (ADS)

    Goelzer, H.; Nowicki, S.; Edwards, T.; Beckley, M.; Abe-Ouchi, A.; Aschwanden, A.; Calov, R.; Gagliardini, O.; Gillet-Chaulet, F.; Golledge, N. R.; Gregory, J. M.; Greve, R.; Humbert, A.; Huybrechts, P.; Larour, E. Y.; Lipscomb, W. H.; Le clec'h, S.; Lee, V.; Kennedy, J. H.; Pattyn, F.; Payne, A. J.; Rodehacke, C. B.; Rückamp, M.; Saito, F.; Schlegel, N.; Seroussi, H. L.; Shepherd, A.; Sun, S.; van de Wal, R.; Ziemen, F. A.

    2016-12-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initialisation can have a large effect on the projections and give rise to important uncertainties. The goal of this intercomparison exercise (initMIP-Greenland) is to compare, evaluate and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss final results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  4. Investigation of Small-Caliber Primer Function Using a Multiphase Computational Model

    DTIC Science & Technology

    2008-07-01

    all solid walls along with specified inflow at the primer orifice (-0.102 cm < Y < 0.102 cm at X = 0). Initially, the entire flowfield is filled...to explicitly treat both the gas and solid phase. The model is based on the One-Dimensional Turbulence modeling approach that has recently emerged as...a powerful tool in multiphase simulations. Initial results are shown for the model run as a stand-alone code and are compared to recent experiments

  5. The influence of initial and surface boundary conditions on a model-generated January climatology

    NASA Technical Reports Server (NTRS)

    Wu, K. F.; Spar, J.

    1981-01-01

    The influence of various surface boundary conditions, as well as initial conditions, on a model-generated January climate was studied using the GISS coarse-mesh climate model. Four experiments - two with water planets, one with flat continents, and one with mountains - were used to investigate the effects of initial conditions and the thermal and dynamical effects of the surface on the model-generated climate. A climatological-mean, zonally symmetric sea surface temperature was used over the model oceans in all four runs. Moreover, zero ground wetness and a uniform ground albedo (except for snow) were used in the last experiments.

  6. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
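The monitor-and-revert idea described above can be sketched as a simplex-style switch (illustrative only; the function names and safety predicate are invented, and a real run-time assurance system would monitor trajectories, not single states):

```python
def assured_command(state, advanced, backup, is_safe):
    """Use the advanced (uncertified) controller while the monitored
    `state` passes the safety check; otherwise revert to the simpler,
    certified backup controller."""
    return advanced(state) if is_safe(state) else backup(state)
```

The certification burden then falls on the monitor and the backup controller rather than on the advanced algorithm itself, which is the core appeal of the approach for advanced propulsion control.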

  7. Feedbacks between Air Pollution and Weather, Part 1: Effects on Weather

    EPA Science Inventory

    The meteorological predictions of fully coupled air-quality models running in “feedback” versus “no-feedback” simulations were compared against each other as part of Phase 2 of the Air Quality Model Evaluation International Initiative. The model simulations included a “no-feedback...

  8. Characteristics code for shock initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Partom, Y.

    1986-10-01

    We developed SHIN, a characteristics code for shock initiation studies. We describe in detail the equations of state, reaction model, rate equations, and numerical difference equations that SHIN incorporates. SHIN uses the previously developed surface-burning reaction model, which better represents the shock initiation process in TATB than do bulk reaction models. A large number of computed simulations show that the code is a reliable and efficient tool for shock initiation studies. A parametric study shows the effect on build-up and run distance to detonation of (1) type of boundary condition, (2) burning velocity curve, (3) shock duration, (4) rise time in ramp loading, (5) initial density (or porosity) of the explosive, (6) initial temperature, and (7) grain size. 29 refs., 65 figs.

  9. Diabatic Initialization of Mesoscale Models in the Southeastern United States: Can 0 to 12h Warm Season QPF be Improved?

    NASA Technical Reports Server (NTRS)

    Lapenta, William M.; Bradshaw, Tom; Burks, Jason; Darden, Chris; Dembek, Scott

    2003-01-01

    It is well known that numerical warm season quantitative precipitation forecasts lack significant skill for numerous reasons. Some are related to the model: it may lack physical processes required to realistically simulate convection, or the numerical algorithms and dynamics employed may not be adequate. Others are related to initialization: mesoscale features play an important role in convective initiation, and atmospheric observation systems are incapable of properly depicting the three-dimensional stability structure at the mesoscale. The purpose of this study is to determine whether a mesoscale model initialized with a diabatic initialization scheme can improve short-term (0 to 12 h) warm season quantitative precipitation forecasts in the Southeastern United States. The Local Analysis and Prediction System (LAPS), developed at the Forecast Systems Laboratory, is used to diabatically initialize the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) Mesoscale Model version 5 (MM5). The SPoRT Center runs LAPS operationally on an hourly cycle to produce analyses on a 15-km grid covering the eastern 2/3 of the United States. The 20-km National Centers for Environmental Prediction (NCEP) Rapid Update Cycle analyses are used for the background fields. Standard observational data are acquired from MADIS, with GOES and CRAFT Nexrad data acquired from in-house feeds. The MM5 is configured on a 140 x 140, 12-km grid centered on Huntsville, Alabama. Preliminary results indicate that MM5 runs initialized with LAPS produce improved 6 and 12 h QPF threat scores compared with those initialized with the NCEP RUC.
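The threat score (also called the critical success index) used to compare the runs is a standard contingency-table measure of precipitation forecasts; a minimal sketch over paired forecast/observed amounts (a hypothetical helper, not the study's verification code):

```python
def threat_score(forecast, observed, threshold):
    """Threat score = hits / (hits + misses + false alarms) for the
    event 'precipitation >= threshold' over paired value lists."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif o_event:
            misses += 1
        elif f_event:
            false_alarms += 1
    total = hits + misses + false_alarms
    return hits / total if total else float("nan")
```

A perfect forecast scores 1.0; correct negatives do not enter the score, which is why the measure suits rare events like heavy warm-season rainfall.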

  10. Street-running LRT may not affect a neighbour's sleep

    NASA Astrophysics Data System (ADS)

    Sarkar, S. K.; Wang, J.-N.

    2003-10-01

    A comprehensive dynamic finite difference model and analysis was conducted simulating LRT running at a speed of 24 km/h on a city street. The analysis predicted ground-borne vibration (GBV) to remain at or below the FTA criterion of an RMS velocity of 72 VdB (0.004 in/s) at the nearest residence. In the model, site-specific stratigraphy and dynamic soil and rock properties were used that were determined from in situ testing. The dynamic input load from an LRT vehicle running at 24 km/h was computed from actual measured data from Portland, Oregon's West Side LRT project, which used a low-floor vehicle similar to the one proposed for the NJ Transit project. During initial trial runs of the LRT system, vibration and noise measurements were taken at three street locations while the vehicles were running at about the 20-24 km/h operating speed. The measurements confirmed the predictions and satisfied FTA criteria for noise and vibration for frequent events. This paper presents the analytical model, GBV predictions, site measurement data and comparison with the FTA criterion.
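The quoted 72 VdB figure follows from the standard decibel definition relative to the FTA reference velocity of one micro-inch per second:

```python
import math

def vibration_vdb(rms_velocity, ref=1e-6):
    """Ground-borne vibration level in VdB: 20 * log10(v / v_ref),
    with both velocities in inches/s and the FTA reference of 1e-6 in/s."""
    return 20.0 * math.log10(rms_velocity / ref)
```

Evaluating `vibration_vdb(0.004)` gives approximately 72.0 VdB, consistent with the equivalence between 0.004 in/s and 72 VdB stated in the abstract.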

  11. Model Evaluation and Ensemble Modelling of Surface-Level Ozone in Europe and North America in the Context of AQMEII

    EPA Science Inventory

    More than ten state-of-the-art regional air quality models have been applied as part of the Air Quality Model Evaluation International Initiative (AQMEII). These models were run by twenty independent groups in Europe and North America. Standardised modelling outputs over a full y...

  12. Assessment of the MACC reanalysis and its influence as chemical boundary conditions for regional air quality modeling in AQMEII-2

    EPA Science Inventory

    The Air Quality Model Evaluation International Initiative (AQMEII) has now reached its second phase which is dedicated to the evaluation of online coupled chemistry-meteorology models. Sixteen modeling groups from Europe and five from North America have run regional air quality m...

  13. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.

    2018-04-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but find differences arising from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.
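Model drift in the unforced control run can be quantified simply as the mean rate of change of a bulk state variable such as ice volume (a sketch of the concept; the intercomparison's actual diagnostics are richer and spatially resolved):

```python
def mean_drift(series, step_years=1.0):
    """Mean rate of change of an annually sampled state variable
    (e.g. ice volume) over an unforced control run."""
    span = step_years * (len(series) - 1)
    return (series[-1] - series[0]) / span
```

A well-initialised model should show drift near zero in the control experiment, so that any signal in the perturbed run can be attributed to the prescribed surface mass balance anomaly.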

  14. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE PAGES

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; ...

    2018-04-19

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but find differences arising from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  15. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  16. Potential impact of initialization on decadal predictions as assessed for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Branstator, Grant; Teng, Haiyan

    2012-06-01

    To investigate the potential for initialization to improve decadal range predictions, we quantify the initial value predictability of upper 300 m temperature in the two northern ocean basins for 12 models from Coupled Model Intercomparison Project phase 5 (CMIP5), and we contrast it with the forced predictability in Representative Concentration Pathways (RCP) 4.5 climate change projections. We use a recently introduced method that produces predictability estimates from long control runs. Many initial states are considered, and we find on average 1) initialization has the potential to improve skill in the first 5 years in the North Pacific and the first 9 years in the North Atlantic, and 2) the impact from initialization becomes secondary compared to the impact of RCP4.5 forcing after 6 1/2 and 8 years in the two basins, respectively. Model-to-model and spatial variations in these limits are, however, substantial.

  17. Assimilation of GOES satellite-based convective initiation and cloud growth observations into the Rapid Refresh and HRRR systems to improve aviation forecast guidance

    NASA Astrophysics Data System (ADS)

    Mecikalski, John; Smith, Tracy; Weygandt, Stephen

    2014-05-01

    Latent heating profiles derived from GOES satellite-based cloud-top cooling rates are being assimilated into a retrospective version of the Rapid Refresh system (RAP) being run at the Global Systems Division. Assimilation of these data may help reduce the time lag for convection initiation (CI) in both the RAP model forecasts and in 3-km High Resolution Rapid Refresh (HRRR) model runs that are initialized from the RAP model grids. These data may also improve both the location and organization of developing convective storm clusters, especially in the nested HRRR runs. These types of improvements are critical for providing better convective storm guidance around busy hub airports and aviation corridor routes, especially in the highly congested Ohio Valley - Northeast - Mid-Atlantic region. Additional work is focusing on assimilating GOES-R CI algorithm cloud-top cooling-based latent heating profiles directly into the HRRR model. Because of the small-scale nature of the convective phenomena depicted in the cloud-top cooling rate data (on the order of 1-4 km scale), direct assimilation of these data in the HRRR may be more effective than assimilation in the RAP. The RAP is an hourly assimilation system developed at NOAA/ESRL and was implemented at NCEP as a NOAA operational model in May 2012. The 3-km HRRR runs hourly out to 15 hours as a nest within the ESRL real-time experimental RAP. The RAP and HRRR both use the WRF ARW model core, and the Gridpoint Statistical Interpolation (GSI) is used within an hourly cycle to assimilate a wide variety of observations (including radar data) to initialize the RAP. Within this modeling framework, the cloud-top cooling rate-based latent heating profiles are applied as prescribed heating during the diabatic forward model integration part of the RAP digital filter initialization (DFI).
No digital filtering is applied on the 3-km HRRR grid, but similar forward model integration with prescribed heating is used to assimilate information from radar reflectivity, lightning flash density and the satellite-based cloud-top cooling rate data. In the current HRRR configuration, four 15-min cycles of latent heating are applied during a pre-forecast hour of integration. This is followed by a final application of GSI at 3 km to fit the latest conventional observation data. At the conference, results from a 5-day retrospective period (July 5-10, 2012) will be shown, focusing on assessment of data impact for both the RAP and HRRR, as well as the sensitivity to various assimilation parameters, including assumed heating strength. Emphasis will be given to documenting the forecast impacts for aviation applications in the Eastern U.S.

  18. Evaluation of operational online-coupled regional air quality models over Europe and North America in the context of AQMEII phase 2. Part II: Particulate Matter

    EPA Science Inventory

    The second phase of the Air Quality Model Evaluation International Initiative (AQMEII) brought together seventeen modeling groups from Europe and North America, running eight operational online-coupled air quality models over Europe and North America on common emissions and bound...

  19. Evaluation of operational online-coupled regional air quality models over Europe and North America in the context of AQMEII phase 2. Part 1: Ozone

    EPA Science Inventory

    The second phase of the Air Quality Model Evaluation International Initiative (AQMEII) brought together sixteen modeling groups from Europe and North America, running eight operational online-coupled air quality models over Europe and North America on common emissions and boundar...

  20. On the low pressure shock initiation of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine based plastic bonded explosives

    NASA Astrophysics Data System (ADS)

    Vandersall, Kevin S.; Tarver, Craig M.; Garcia, Frank; Chidester, Steven K.

    2010-05-01

    In large explosive and propellant charges, relatively low shock pressures on the order of 1-2 GPa impacting large volumes and lasting tens of microseconds can cause shock initiation of detonation. The pressure buildup process requires several centimeters of shock propagation before shock to detonation transition occurs. In this paper, experimentally measured run distances to detonation for lower input shock pressures are shown to be much longer than predicted by extrapolation of high shock pressure data. Run distance to detonation and embedded manganin gauge pressure histories are measured using large diameter charges of six octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) based plastic bonded explosives (PBX's): PBX 9404; LX-04; LX-07; LX-10; PBX 9501; and EDC37. The embedded gauge records show that the lower shock pressures create fewer and less energetic "hot spot" reaction sites, which consume the surrounding explosive particles at reduced reaction rates and cause longer distances to detonation. The experimental data is analyzed using the ignition and growth reactive flow model of shock initiation in solid explosives. Using minimum values of the degrees of compression required to ignite hot spot reactions, the previously determined high shock pressure ignition and growth model parameters for the six explosives accurately simulate the much longer run distances to detonation and much slower growths of pressure behind the shock fronts measured during the shock initiation of HMX PBX's at several low shock pressures.

  1. The joy of interactive modeling

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert

    2013-04-01

    The conventional way of working with hydrodynamical models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files) 2) run the model from start to end 3) visualize results 4) repeat any of the previous steps. This cycle commonly takes from hours to several days. What if we can make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions or state changes at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates how the model engine is instructed from 'outside', instead of the model engine taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables, and control functions, e.g. for time stepping, via a standardized interface, such as BMI (Peckham et al., 2012). In this work we have used the shallow-water flow model engine D-Flow Flexible Mesh. 
The model was converted from an executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open-source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using the VTK library.
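
The "always available in memory" control pattern described above can be sketched with a toy BMI-style engine. This is an illustrative stand-in, not D-Flow Flexible Mesh; only the method names (initialize, update, update_until, get_value, set_value) follow the BMI convention.

```python
import numpy as np

class ToyEngine:
    """Minimal BMI-style engine: water levels relax toward a boundary
    value. A hypothetical stand-in for a real flow engine."""
    def initialize(self, config=None):
        self.t = 0.0
        self.dt = 1.0
        self.level = np.zeros(5)   # state exposed through the interface
        self.boundary = 1.0

    def update(self):              # advance one time step
        self.level += 0.5 * (self.boundary - self.level) * self.dt
        self.t += self.dt

    def update_until(self, t_end):
        while self.t < t_end:
            self.update()

    def get_value(self, name):
        return {"water_level": self.level, "boundary": self.boundary}[name]

    def set_value(self, name, value):
        if name == "boundary":
            self.boundary = value

# Interactive session: run part-way, edit a boundary condition "live",
# then keep running; the engine is never restarted from scratch.
engine = ToyEngine()
engine.initialize()
engine.update_until(5.0)
mid = float(engine.get_value("water_level").mean())
engine.set_value("boundary", 2.0)
engine.update_until(10.0)
final = float(engine.get_value("water_level").mean())
print(mid, final)
```

A graphical environment such as Delta Shell would sit on top of exactly these calls, redrawing the exposed state after every update.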

  2. The Met Office Coupled Atmosphere/Land/Ocean/Sea-Ice Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Mirouze, Isabelle; Martin, Matthew; Hines, Adrian; Guiavarch, Catherine; Shelly, Ann

    2014-05-01

    The Met Office has developed a weakly-coupled data assimilation (DA) system using the global coupled model HadGEM3 (Hadley Centre Global Environment Model, version 3). This model combines the atmospheric model UM (Unified Model) at 60 km horizontal resolution on 85 vertical levels, the ocean model NEMO (Nucleus for European Modeling of the Ocean) at 25 km (at the equator) horizontal resolution on 75 vertical levels, and the sea-ice model CICE at the same resolution as NEMO. The atmosphere and the ocean/sea-ice fields are coupled every hour using the OASIS coupler. The coupled model is corrected using two separate 6-hour window data assimilation systems: a 4D-Var for the atmosphere with associated soil moisture content nudging and snow analysis schemes on the one hand, and a 3D-Var FGAT for the ocean and sea-ice on the other hand. The background information in the DA systems comes from a previous 6-hour forecast of the coupled model. To show the impact of coupled DA, one-month experiments have been carried out, including 1) a full atmosphere/land/ocean/sea-ice coupled DA run, 2) an atmosphere-only run forced by OSTIA SSTs and sea-ice with atmosphere and land DA, and 3) an ocean-only run forced by atmospheric fields from run 2 with ocean and sea-ice DA. In addition, 5-day forecast runs, started twice a day, have been produced from initial conditions generated by either run 1 or a combination of runs 2 and 3. The different results have been compared to each other and, whenever possible, to other references such as the Met Office atmosphere and ocean operational analyses or the OSTIA data. These all show the coupled DA system functioning well. The runs were also examined for evidence of imbalances and initialisation shocks.

  3. Is Vacation Apprenticeship of Undergraduate Life Science Students a Model for Human Capacity Development in the Life Sciences?

    ERIC Educational Resources Information Center

    Downs, Colleen Thelma

    2010-01-01

    A life sciences undergraduate apprenticeship initiative was run during the vacations at a South African university. In particular, the initiative aimed to increase the number of students from disadvantaged backgrounds. Annually 12-18 undergraduate biology students were apprenticed to various institutions during the January and July vacations from…

  4. 42 CFR § 512.307 - Subsequent calculations.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.307... the initial NPRA, using claims data and non-claims-based payment data available at that time, to account for final claims run-out, final changes in non-claims-based payment data, and any additional...

  5. Feedbacks between Air Pollution and Weather, Part 2: Effects on Chemistry.

    EPA Science Inventory

    Fully-coupled air-quality models running in “feedback” and “no-feedback” configurations were compared against each other and observation network data as part of Phase 2 of the Air Quality Model Evaluation International Initiative. In the “no-feedback” mode, interactions between m...

  6. Kinetics of Eucalyptus globulus delignification in a methanol-water medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilarranz, M.A.; Rodriguez, F.; Santos, A.

    1999-09-01

    The kinetics of Eucalyptus globulus delignification in methanol-water pulping has been studied. A total of 17 isothermal runs at a liquor-to-wood ratio of 50 L/kg were carried out to develop the kinetic model describing the system. In a first series of experiments, eight models were considered to study the influence of temperature on the delignification rate. The most suitable model, which was discriminated according to statistical criteria, describes delignification as the consecutive dissolution of three lignin species: initial, bulk, and residual lignin, their content in wood being 10, 69, and 21%, respectively. Initial and residual delignification were considered as irreversible reactions and bulk delignification as reversible. The influence of hydrogen ion concentration was taken into account by means of a general power-law expression. The model proposed was validated by reproducing the experimental data from four runs carried out under nonisothermal conditions and a liquor-to-wood ratio of 7 L/kg, which are closer to industrial operating conditions.
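
As a rough numerical illustration of the scheme described in this record (three lignin fractions dissolving in parallel, the bulk fraction reversibly, with a power-law dependence on hydrogen ion concentration), the rate equations can be integrated as below. All rate constants and the power-law exponent are invented placeholders, not the paper's fitted values.

```python
import numpy as np

def delignification(t_end, dt=0.01, h_ion=1e-3,
                    k_i=0.8, k_b=0.3, k_rev=0.002, k_r=0.02, m=0.5):
    """Forward-Euler integration of three dissolving lignin fractions.
    Acid catalysis enters through the power law k_eff = k * [H+]**m."""
    acid = h_ion ** m
    L = np.array([10.0, 69.0, 21.0])  # initial, bulk, residual (% of wood)
    dissolved = 0.0
    for _ in range(int(t_end / dt)):
        r_init = k_i * acid * L[0]                      # irreversible
        r_bulk = k_b * acid * L[1] - k_rev * dissolved  # reversible
        r_resid = k_r * acid * L[2]                     # irreversible
        rates = np.array([r_init, r_bulk, r_resid])
        L -= rates * dt
        dissolved += rates.sum() * dt                   # mass is conserved
    return L, dissolved

remaining, dissolved = delignification(t_end=200.0)
print(f"lignin left in wood: {remaining.sum():.1f}%, dissolved: {dissolved:.1f}%")
```

Fitting the rate constants and exponent against the 17 isothermal runs would then be a standard nonlinear least-squares problem.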

  7. On the use of tower-flux measurements to assess the performance of global ecosystem models

    NASA Astrophysics Data System (ADS)

    El Maayar, M.; Kucharik, C.

    2003-04-01

    Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge, gained from local observations, into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, earth's vegetation is typically represented by a limited number of plant functional types (PFT; group of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters are assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to a group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were, thus, performed: a) Detailed runs: observed vegetation characteristics (leaf area index, vegetation height,...) 
and soil carbon content, in addition to climate and soil type, are specified for the model run; and b) Generic runs: only the observed climate and soil type at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.

  8. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

    NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, this type of model is defined as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The model is then shown to successfully predict another similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
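
The "model as calibrated instrument" idea in this abstract can be sketched in a few lines: residuals between past predictions and measurements set an uncertainty band for the next prediction. The flow-rate numbers below are invented for illustration; they are not IPD test data.

```python
import statistics

# measured minus predicted LOX flow (kg/s) from earlier calibration runs
# (hypothetical residuals, for illustration only)
residuals = [0.8, -1.2, 0.5, -0.3, 1.1, -0.9, 0.2, -0.6]

bias = statistics.mean(residuals)   # systematic offset of the model
spread = statistics.stdev(residuals)
u_model = 2.0 * spread              # ~95% band, as in instrument calibration

# predict the next run, then check the new measurement against the band
predicted_next = 142.0              # model prediction (hypothetical)
measured_next = 141.1               # test-stand measurement (hypothetical)
within = abs(measured_next - (predicted_next + bias)) <= u_model
print(f"u_model = +/-{u_model:.2f} kg/s; prediction validated: {within}")
```

After each new test the residual list grows and bias and u_model are recomputed, which is the recalibration loop the paper describes.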

  9. Assessing the debris flow run-out frequency of a catchment in the French Alps using a parameterization analysis with the RAMMS numerical run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, H. Y.; Luna, B. Quan; van Westen, C. J.; Christen, M.; Malet, J.-P.; van Asch, Th. W. J.

    2012-04-01

    Debris flows occurring in the European Alps frequently cause significant damage to settlements, power lines and transportation infrastructure, which has led to traffic disruptions, economic loss and even death. Estimating the debris flow run-out extent and the parameter uncertainty related to run-out modeling are some of the difficulties found in the Quantitative Risk Assessment (QRA) of debris flows. Moreover, the entrainment of material into a debris flow is still not completely understood. Debris flows observed in the French Alps entrain 5-50 times the initially mobilized source volume. In this study we analyze a debris flow that occurred in 2003 at the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The analysis was carried out using the Voellmy rheology and an entrainment model embedded in the RAMMS 2D numerical modeling software. The historic event was back calibrated based on source, entrainment and deposit volumes, including the run-out distance, velocities and deposit heights of the debris flow. This was then followed by a sensitivity analysis of the rheological and entrainment parameters to produce 120 debris flow scenarios leading to a frequency assessment of the run-out distance and deposit height at the debris fan. The study shows that the Voellmy frictional parameters mainly influence the run-out distance and velocity of the flow, while the entrainment parameter has a major impact on the debris flow height. The frequency assessment of the 120 simulated scenarios further gives an indication of the most likely debris flow run-out extents and heights for this catchment. Such an assessment can be an important link between the rheological model parameters and the spatial probability of the run-out for the Quantitative Risk Assessment (QRA) of debris flows.
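
The parameter sweep described above can be mimicked with a point-mass (sliding-block) Voellmy model: 120 (mu, xi) combinations are integrated over a simple slope-then-fan profile and the resulting run-out distances tallied. This is a deliberately crude stand-in for RAMMS, not its solver; the profile, flow depth and parameter ranges are illustrative, not the Faucon values.

```python
import itertools, math

G = 9.81

def runout(mu, xi, h=2.0, slope_deg=25.0, fan_deg=5.0,
           slope_len=800.0, dt=0.1, x_max=5000.0):
    """Integrate dv/dt = g(sin(a) - mu*cos(a)) - g*v**2/(xi*h) until the
    block stops on the fan (or the domain ends); return distance (m)."""
    x, v = 0.0, 1.0
    while v > 0.0 and x < x_max:
        theta = math.radians(slope_deg if x < slope_len else fan_deg)
        a = G * (math.sin(theta) - mu * math.cos(theta)) - G * v * v / (xi * h)
        v = max(0.0, v + a * dt)
        x += v * dt
    return x

# 120 scenarios over a (mu, xi) grid, then a simple frequency assessment
mus = [0.05 + 0.01 * i for i in range(10)]      # Coulomb friction
xis = [200.0 * (j + 1) for j in range(12)]      # turbulent friction (m/s^2)
distances = [runout(mu, xi) for mu, xi in itertools.product(mus, xis)]
p_exceed = sum(d > 1000.0 for d in distances) / len(distances)
print(f"{len(distances)} scenarios; fraction with run-out > 1 km: {p_exceed:.2f}")
```

Replacing the tally threshold with a grid of distances (and adding a deposit-height proxy) gives the kind of frequency curves the abstract describes.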

  10. Two Blades-Up Runs Using the JetStream Navitus Atherectomy Device Achieve Optimal Tissue Debulking of Nonocclusive In-Stent Restenosis: Observations From a Porcine Stent/Balloon Injury Model.

    PubMed

    Shammas, Nicolas W; Aasen, Nicole; Bailey, Lynn; Budrewicz, Jay; Farago, Trent; Jarvis, Gary

    2015-08-01

    To determine the number of runs with blades up (BU) using the JetStream Navitus to achieve optimal debulking in a porcine model of femoropopliteal artery in-stent restenosis (ISR). In this porcine model, 8 limbs were implanted with overlapping nitinol self-expanding stents. ISR was treated initially with 2 blades-down (BD) runs followed by 4 BU runs (BU1 to BU4). Quantitative vascular angiography (QVA) was performed at baseline, after 2 BD runs, and after each BU run. Plaque surface area and percent stenosis within the treated stented segment were measured. Intravascular ultrasound (IVUS) was used to measure minimum lumen area (MLA) and determine IVUS-derived plaque surface area. QVA showed that plaque surface area was significantly reduced between baseline (83.9%±14.8%) and 2 BD (67.7%±17.0%, p=0.005) and BU1 (55.4%±9.0%, p=0.005) runs, and between BU1 and BU2 runs (50.7%±9.7%, p<0.05). Percent stenosis behaved similarly with no further reduction after BU2. There were no further reductions in plaque surface area or percent stenosis with BU3 and BU4 runs (p=0.10). Similarly, IVUS (24 lesions) confirmed optimal results with BU2 runs and no additional gain in MLA or reduction in plaque surface area with BU3 and BU4. IVUS confirmed no orbital cutting with JetStream Navitus. There were no stent strut discontinuities on high-resolution radiographs following atherectomy. JetStream Navitus achieved optimal tissue debulking after 2 BD and 2 BU runs with no further statistical gain in debulking after the BU2 run. Operators treating ISR with JetStream Navitus may be advised to limit their debulking to 2 BD and 2 BU runs to achieve optimal debulking.

  11. Impact of Lake Okeechobee Sea Surface Temperatures on Numerical Predictions of Summertime Convective Systems over South Florida

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Splitt, Michael E.; Fuell, Kevin K.; Santos, Pablo; Lazarus, Steven M.; Jedlovec, Gary J.

    2009-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center, the Florida Institute of Technology, and the NOAA/NWS Weather Forecast Office at Miami, FL (MFL) are collaborating on a project to investigate the impact of using high-resolution, 2-km Moderate Resolution Imaging Spectroradiometer (MODIS) sea surface temperature (SST) composites within the Weather Research and Forecasting (WRF) prediction system. The NWS MFL is currently running WRF in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily initialized at 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution. The project objective is to determine whether more accurate specification of the lower-boundary forcing over water using the MODIS SST composites within the 4-km WRF runs will result in improved sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. SPoRT conducted parallel WRF EMS runs from February to August 2007 identical to the operational runs at NWS MFL except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. During the course of this evaluation, an intriguing case was examined from 6 May 2007, in which lake breezes and convection around Lake Okeechobee evolved quite differently when using the high-resolution SPoRT MODIS SST composites versus the lower-resolution RTG SSTs. 
This paper will analyze the differences in the 6 May simulations, as well as examine other cases from the summer 2007 in which the WRF-simulated Lake Okeechobee breezes evolved differently due to the SST initialization. The effects on wind fields and precipitation systems will be emphasized, including validation against surface mesonet observations and Stage IV precipitation grids.

  12. Modeling surface-water flow and sediment mobility with the Multi-Dimensional Surface-Water Modeling System (MD_SWMS)

    USGS Publications Warehouse

    McDonald, Richard; Nelson, Jonathan; Kinzel, Paul; Conaway, Jeffrey S.

    2006-01-01

    The Multi-Dimensional Surface-Water Modeling System (MD_SWMS) is a Graphical User Interface for surface-water flow and sediment-transport models. The capabilities of MD_SWMS for developing models include: importing raw topography and other ancillary data; building the numerical grid and defining initial and boundary conditions; running simulations; visualizing results; and comparing results with measured data.

  13. Modeling the bloom evolution and carbon flows during SOIREE: Implications for future in situ iron-enrichments in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Hannon, E.; Boyd, P. W.; Silvoso, M.; Lancelot, C.

    The impact of a mesoscale in situ iron-enrichment experiment (SOIREE) on the planktonic ecosystem and biological pump in the Australasian-Pacific sector of the Southern Ocean was investigated through model simulations over a period of 60-d following an initial iron infusion. For this purpose we used a revised version of the biogeochemical SWAMCO model (Lancelot et al., 2000), which describes the cycling of C, N, P, Si, Fe through aggregated chemical and biological components of the planktonic ecosystem in the high nitrate low chlorophyll (HNLC) waters of the Southern Ocean. Model runs were conducted for both the iron-fertilized waters and the surrounding HNLC waters, using in situ meteorological forcing. Validation was performed by comparing model predictions with observations recorded during the 13-d site occupation of SOIREE. Considerable agreement was found for the magnitude and temporal trends in most chemical and biological variables (the microbial food web excepted). Comparison of simulations run for 13- and 60-d showed that the effects of iron fertilization on the biota were incomplete over the 13-d monitoring of the SOIREE bloom. The model results indicate that after the vessel departed the SOIREE site there were further iron-mediated increases in properties such as phytoplankton biomass, production, export production, and uptake of atmospheric CO2, which peaked 20-30 days after the initial iron infusion. Based on model simulations, the increase in net carbon production at the scale of the fertilized patch (assuming an area of 150 km2) was estimated at 9725 t C by day 60. Much of this production accumulated in the upper ocean, so that the predicted downward export of particulate organic carbon (POC) only represented 22% of the accumulated C in the upper ocean. Further model runs that implemented improved parameterization of diatom sedimentation (i.e. 
including iron-mediated diatom sinking rates, diatom chain formation and aggregation) suggested that the downward POC flux predicted by the standard run might have been underestimated by a factor of up to 3. Finally, a sensitivity analysis of the biological response to iron-enrichment at locales with different initial oceanographic conditions (such as mixed-layer depth) or using different iron fertilization strategies (single vs. pulsed additions) was conducted. The outcomes of this analysis offer insights into the design and location of future in situ iron-enrichments.

  14. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  15. Contingency discriminability and the generalized matching law describe choice on concurrent ratio schedules of wheel-running reinforcement.

    PubMed

    Belke, Terry W

    2012-07-01

    Belke (2010) showed that on concurrent ratio schedules, the difference in ratio requirements needed to produce near-exclusive preference for the lower-ratio alternative was substantively greater when the reinforcer was wheel running than when it was sucrose. The current study replicated this finding and showed that this choice behavior can be described by the matching law and the contingency discriminability model. Eight female Long Evans rats were exposed to concurrent VR schedules of wheel-running reinforcement (30 s), and the schedule value of the initially preferred alternative was systematically increased. Two rats rapidly developed exclusive preference for the lower-ratio alternative, but the majority did not, even when the ratios differed by 20:1. Analysis showed that estimates of slopes from the matching law and of the proportion of reinforcers misattributed from the contingency discriminability model were related to the ratios at which near-exclusive preference developed. The fit of these models would be consistent with misattribution of reinforcers or poor discrimination between alternatives due to the long duration of wheel running. Copyright © 2012 Elsevier B.V. All rights reserved.
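    The generalized matching law mentioned above is typically fit in log-ratio form, log10(B1/B2) = a*log10(R1/R2) + log10(b), with sensitivity a and bias b estimated by least squares. A minimal sketch with invented response (B) and reinforcer (R) counts, not data from the study:

```python
import numpy as np

# Hypothetical response counts (B) and obtained reinforcers (R) on two
# alternatives across four concurrent-schedule conditions
B1 = np.array([50.0, 80.0, 120.0, 200.0])
B2 = np.array([60.0, 40.0, 25.0, 12.0])
R1 = np.array([30.0, 45.0, 60.0, 90.0])
R2 = np.array([35.0, 22.0, 13.0, 6.0])

# Fit log(B1/B2) = a*log(R1/R2) + log(b) by ordinary least squares;
# a < 1 would indicate undermatching, log10(b) != 0 a side bias
a, log_b = np.polyfit(np.log10(R1 / R2), np.log10(B1 / B2), 1)
```

    Slopes estimated this way are the quantities the study relates to the ratios at which near-exclusive preference developed.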

  16. Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.

    PubMed

    Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P

    2017-09-01

    The aim of the study was to determine whether habitual minimalist shoe runners present with purported favorable running biomechanics that reduce running injury risk, such as a lower initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. Results of a two-factor ANOVA presented no group difference in initial rate of loading when participants were running either shod or barefoot; however, initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefits of minimalist running shoes in mimicking habitual barefoot running are questioned, and risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Seasonal streamflow prediction using ensemble streamflow prediction technique for the Rangitata and Waitaki River basins on the South Island of New Zealand

    NASA Astrophysics Data System (ADS)

    Singh, Shailesh Kumar

    2014-05-01

    Streamflow forecasts are essential for making critical decisions about the optimal allocation of water supplies for various demands, including irrigation for agriculture, habitat for fisheries, hydropower production, and flood warning. The major objective of this study is to explore Ensemble Streamflow Prediction (ESP) based forecasting in New Zealand catchments and to highlight the present seasonal flow forecasting capability of the National Institute of Water and Atmospheric Research (NIWA). In this study a probabilistic forecast framework for ESP is presented. The basic assumption in ESP is that future weather patterns have been experienced historically. Hence, past forcing data can be used with the current initial condition to generate an ensemble of predictions. Small differences in initial conditions can result in large differences in the forecast. The initial state of the catchment is obtained by continuously running the model up to the current time; this initial state is then used with past forcing data to generate an ensemble of future flows. The approach taken here is to run TopNet hydrological models with a range of past forcing data (precipitation, temperature, etc.) from the current initial conditions. The collection of runs is called the ensemble. ESP gives probabilistic forecasts for flow: probability distributions can be derived from the ensemble members. These distributions capture part of the intrinsic uncertainty in weather or climate. An ensemble streamflow prediction system that provides probabilistic hydrological forecasts with lead times up to 3 months is presented for the Rangitata, Ahuriri, Hooker, and Jollie rivers on the South Island of New Zealand. ESP-based seasonal forecasts have better skill than climatology. This system can provide better overall information for holistic water resource management.
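    The ESP recipe above (one current model state, many historical forcing traces) can be sketched as follows. The toy water-balance model, state variable, and forcing statistics are invented for illustration; TopNet itself is far more elaborate:

```python
import numpy as np

def esp_forecast(model, current_state, historical_forcings):
    """Ensemble Streamflow Prediction: run the model once per historical
    forcing trace, always starting from the same current initial state."""
    return np.array([model(current_state, f) for f in historical_forcings])

def toy_model(state, forcing):
    # Seasonal flow = release from current storage + runoff from rainfall
    return 0.3 * state["storage_mm"] + 0.6 * float(np.sum(forcing["precip_mm"]))

# Initial state from a continuous run up to "today"; 30 past 90-day seasons
state = {"storage_mm": 120.0}
history = [{"precip_mm": np.random.default_rng(year).gamma(2.0, 5.0, 90)}
           for year in range(1990, 2020)]

ensemble = esp_forecast(toy_model, state, history)
p10, p50, p90 = np.percentile(ensemble, [10, 50, 90])  # probabilistic forecast
```

    The percentiles of the ensemble are what make the forecast probabilistic rather than deterministic.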

  18. The Impact of TRMM on Mesoscale Model Simulation of Super Typhoon Paka

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Jia, Y.; Halverson, J.; Hou, A.; Olson, W.; Rodgers, E.; Simpson, J.

    1999-01-01

    Tropical cyclone Paka formed during the first week of December 1997 and underwent three periods of rapid intensification over the following two weeks. During one of these periods, which initiated early on December 10, Paka's Dvorak-measured windspeed increased from 23 to 60 m/s over a 48-hr period. On December 18, during the last rapid deepening episode, Paka became a supertyphoon with a maximum wind speed of about 80 m/s. In this study, the Penn State/NCAR Mesoscale Model (MM5) with improved physics (i.e., cloud microphysics, radiation, land-soil-vegetation-surface processes, and the TOGA COARE flux scheme) and a multiple-level nesting technique (135, 45 and 15 km horizontal resolution) will be used to simulate supertyphoon Paka. We performed two runs initialized with Goddard Earth Observing System (GEOS) data sets. The first GEOS data set does not incorporate either TRMM (Tropical Rainfall Measuring Mission) or SSM/I (Special Sensor Microwave/Imager) observed rainfall fields into the GEOS assimilation system, while the second one does. Preliminary results show that the MM5-simulated surface pressure deepened by more than 25 mb (45 km resolution domain) in the run initialized with the GEOS data set incorporating TRMM and SSM/I derived rainfall, compared to the one initialized without. However, the track and precipitation patterns are quite similar between the runs. In our presentation, we will show the impact of TRMM rainfall upon the MM5 simulation of Paka at various horizontal resolutions. We will also examine the physical processes associated with the initial explosive development by comparing MM5-simulated rainfall and latent heat release. In addition, budget (vorticity, PV, momentum and heat) calculations and sensitivity tests will be performed to examine the upper-tropospheric and SST mechanisms responsible for the explosive development of Paka.

  19. Impact of assimilation of INSAT cloud motion vector (CMV) wind for the prediction of a monsoon depression over Indian Ocean using a mesoscale model

    NASA Astrophysics Data System (ADS)

    Xavier, V. F.; Chandrasekar, A.; Singh, Devendra

    2006-12-01

    The present study utilized the Penn State/NCAR mesoscale model (MM5) to assimilate INSAT-CMV (Indian National Satellite System Cloud Motion Vector) wind observations using analysis nudging, to improve the prediction of a monsoon depression that occurred over the Arabian Sea near India from 14 to 17 September 2005. The NCEP-FNL analysis provided the initial and lateral boundary conditions, and two sets of numerical experiments were designed to reveal the impact of assimilating satellite-derived winds. The model was integrated from 14 September 2005 00 UTC to 17 September 2005 00 UTC with just the NCEP FNL analysis in the NOFDDA run. In the FDDA run, the NCEP FNL analysis fields were improved by assimilating the INSAT-CMV winds (wind speed and wind direction) as well as QuikSCAT sea surface winds during the 24-hour pre-forecast period (14 September 2005 00 UTC to 15 September 2005 00 UTC) using analysis nudging. The model was subsequently run in free forecast mode from 15 September 2005 00 UTC to 17 September 2005 12 UTC. The simulated sea level pressure field from the NOFDDA run reveals a relatively stronger system as compared to the FDDA run. However, the sea level pressure fields corresponding to the FDDA run are closer to the analysis. The simulated lower-tropospheric winds from both experiments reveal a well-developed cyclonic circulation as compared to the analysis.

  20. Full-field initialized decadal predictions with the MPI earth system model: an initial shock in the North Atlantic

    NASA Astrophysics Data System (ADS)

    Kröger, Jürgen; Pohlmann, Holger; Sienz, Frank; Marotzke, Jochem; Baehr, Johanna; Köhl, Armin; Modali, Kameswarrao; Polkova, Iuliia; Stammer, Detlef; Vamborg, Freja S. E.; Müller, Wolfgang A.

    2017-12-01

    Our decadal climate prediction system, which is based on the Max-Planck-Institute Earth System Model, is initialized from a coupled assimilation run that utilizes nudging to selected state parameters from reanalyses. We apply full-field nudging in the atmosphere and either full-field or anomaly nudging in the ocean. Full fields from two different ocean reanalyses are considered. This comparison of initialization strategies focuses on the North Atlantic Subpolar Gyre (SPG) region, where the transition from anomaly to full-field nudging reveals large differences in prediction skill for sea surface temperature and ocean heat content (OHC). We show that nudging of temperature and salinity in the ocean modifies OHC and also induces changes in mass and heat transports associated with the ocean flow. In the SPG region, the assimilated OHC signal closely resembles the observed OHC, regardless of whether full fields or anomalies are used. The resulting ocean transport, on the other hand, reveals considerable differences between full-field and anomaly nudging. In all assimilation runs, ocean heat transport together with net heat exchange at the surface does not match the OHC tendencies; the SPG heat budget is not closed. Discrepancies in the budget in the cases of full-field nudging exceed those in the case of anomaly nudging by a factor of 2-3. The nudging-induced changes in ocean transport continue to be present in the free-running hindcasts for up to 5 years, a clear expression of memory in our coupled system. In hindcast mode, on annual to inter-annual scales, ocean heat transport is the dominant driver of SPG OHC. Thus, we ascribe a significant reduction in OHC prediction skill when using full-field instead of anomaly initialization to an initialization shock resulting from the poor initialization of the ocean flow.
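    Nudging (Newtonian relaxation) of the kind used in the assimilation run adds a restoring term that pulls a model variable toward a reference state on a chosen timescale; in the anomaly variant the reference is the reanalysis anomaly added to the model's own climatology. A scalar sketch, with toy dynamics and invented timescales rather than MPI-ESM settings:

```python
import numpy as np

def step_with_nudging(x, f, x_ref, tau, dt):
    """One explicit Euler step of dx/dt = f(x) + (x_ref - x)/tau,
    i.e. model dynamics plus Newtonian relaxation toward x_ref."""
    return x + dt * (f(x) + (x_ref - x) / tau)

# Toy "ocean temperature" whose internal dynamics relax to 10.0,
# nudged toward a reanalysis value of 12.0 on a shorter timescale
f = lambda x: -(x - 10.0) / 50.0
x = 10.0
for _ in range(2000):
    x = step_with_nudging(x, f, x_ref=12.0, tau=10.0, dt=0.1)
# x equilibrates between the model attractor (10.0) and the reference (12.0)
```

    The compromise equilibrium illustrates one face of the initialization-shock problem: when the nudging is switched off in hindcast mode, the model adjusts back toward its own attractor.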

  1. Using ARM Observations to Evaluate Climate Model Representation of Land-Atmosphere Coupling on the U.S. Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Phillips, T. J.; Klein, S. A.; Ma, H. Y.; Tang, Q.

    2016-12-01

    Statistically significant coupling between summertime soil moisture and various atmospheric variables has been observed at the U.S. Southern Great Plains (SGP) facilities maintained by the U.S. DOE Atmospheric Radiation Measurement (ARM) program (Phillips and Klein, 2014 JGR). In the current study, we employ several independent measurements of shallow-depth soil moisture (SM) and of the surface evaporative fraction (EF) over multiple summers in order to estimate the range of SM-EF coupling strength at seven sites, and to approximate the SGP regional-scale coupling strength (and its uncertainty). We will use this estimate of regional-scale SM-EF coupling strength to evaluate its representation in version 5.1 of the global Community Atmosphere Model (CAM5.1) coupled to the CLM4 Land Model. Two experimental cases are considered for the 2003-2011 study period: 1) an Atmospheric Model Intercomparison Project (AMIP) run with historically observed sea surface temperatures specified, and 2) a more constrained hindcast run in which the CAM5.1 atmospheric state is initialized each day from the ERA Interim reanalysis, while the CLM4 initial conditions are obtained from an offline run of the land model using observed surface net radiation, precipitation, and wind as forcings. These twin experimental cases allow a distinction to be drawn between the land-atmosphere coupling in the free-running CAM5.1/CLM4 model and that in which the land and atmospheric states are constrained to remain closer to "reality". The constrained hindcast case, for example, should allow model errors in coupling strength to be related more closely to potential deficiencies in land-surface or atmospheric boundary-layer parameterizations. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
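    Coupling strength between SM and EF is commonly summarized by the correlation and/or the slope of an EF-on-SM regression over summer days; that convention, and every number below, is an assumption of this sketch rather than the study's exact metric:

```python
import numpy as np

def coupling_strength(sm, ef):
    """Correlation and EF-on-SM regression slope: two common summaries of
    land-atmosphere coupling strength."""
    r = np.corrcoef(sm, ef)[0, 1]
    slope = np.polyfit(sm, ef, 1)[0]
    return r, slope

rng = np.random.default_rng(0)
sm = rng.uniform(0.1, 0.4, 90)                    # daily soil moisture (m3/m3)
ef = 0.3 + 1.5 * sm + rng.normal(0.0, 0.05, 90)   # EF responding to SM + noise
r, slope = coupling_strength(sm, ef)
```

    Comparing such a statistic computed from observations against the same statistic computed from AMIP and hindcast output is the kind of evaluation the abstract describes.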

  2. Development of the CELSS emulator at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Cullingford, Hatice S.

    1990-01-01

    The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator that was initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run model of the simulation system now contains a CELSS model called BLSS. The CELSS simulator empowers us to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.

  3. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    PubMed

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call the rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return-map analysis over a wide range of parameter space, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with a simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with body state profiles similar to those of the model at all five test speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
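    The baseline SLIP model that R-SLIP generalizes is easy to simulate directly. Below is a minimal stance-phase integration (point mass on a massless linear spring leg, foot pinned at the origin); the parameter values and touchdown state are invented, and the rolling contact and torsional spring that distinguish R-SLIP are not modeled:

```python
import numpy as np

def slip_stance(pos0, vel0, m=80.0, k=2.0e4, L0=1.0, g=9.81, dt=1e-4):
    """Integrate the SLIP stance phase with semi-implicit Euler until the
    spring leg returns to its rest length (take-off)."""
    pos = np.array(pos0, dtype=float)
    vel = np.array(vel0, dtype=float)
    while True:
        L = np.linalg.norm(pos)
        acc = (k * (L0 - L) / m) * (pos / L) + np.array([0.0, -g])
        vel += dt * acc                  # velocity first: semi-implicit Euler
        pos += dt * vel
        if np.linalg.norm(pos) >= L0:    # spring back at rest length
            return pos, vel

# Touchdown with the leg 20 degrees in front of vertical, moving forward/down
theta = np.radians(20.0)
pos_td = np.array([-np.sin(theta), np.cos(theta)])   # leg length = L0 = 1 m
vel_td = np.array([3.0, -1.0])
pos_to, vel_to = slip_stance(pos_td, vel_td)         # take-off position/velocity
```

    Sweeping touchdown angle and stiffness over many stance-flight cycles is what produces the return maps and steps-to-fall counts used in the stability analysis above.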

  4. Governing Laws of Complex System Predictability under Co-evolving Uncertainty Sources: Theory and Nonlinear Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.

    2017-12-01

    Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.

  5. How to reduce long-term drift in present-day and deep-time simulations?

    NASA Astrophysics Data System (ADS)

    Brunetti, Maura; Vérard, Christian

    2018-06-01

    Climate models are often affected by a long-term drift that is revealed by the evolution of global variables such as the ocean temperature or the surface air temperature. This spurious trend reduces the fidelity to initial conditions and has a great influence on the equilibrium climate after long simulation times. Useful insight into the nature of the climate drift can be obtained using two global metrics, i.e. the energy imbalance at the top of the atmosphere and at the ocean surface. The former is an indicator of the limitations within a given climate model, at the level of both numerical implementation and physical parameterisations, while the latter is an indicator of the goodness of the tuning procedure. Using the MIT general circulation model, we construct different configurations with various degrees of complexity (i.e. different parameterisations for the bulk cloud albedo, inclusion or not of friction heating, different bathymetry configurations) to which we apply the same tuning procedure in order to obtain control runs for fixed external forcing where the climate drift is minimised. We find that the interplay between the tuning procedure and different configurations of the same climate model provides crucial information on the stability of the control runs and on the goodness of a given parameterisation. This approach is particularly relevant for constructing good-quality control runs of the geological past, where huge uncertainties exist in both initial and boundary conditions. We will focus on robust results that can be generally applied to other climate models.

  6. Comprehensive Assessment of Models and Events based on Library tools (CAMEL)

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; Boblitt, J. M.; DeZeeuw, D.; Mays, M. L.; Kuznetsova, M. M.; Wiegand, C.

    2017-12-01

    At the Community Coordinated Modeling Center (CCMC), the assessment of modeling skill using a library of model-data comparison metrics is taken to the next level by fully integrating the ability to request a series of runs with the same model parameters for a list of events. The CAMEL framework initiates and runs a series of selected, pre-defined simulation settings for participating models (e.g., WSA-ENLIL, SWMF-SC+IH for the heliosphere, SWMF-GM, OpenGGCM, LFM, GUMICS for the magnetosphere) and performs post-processing using existing tools for a host of different output parameters. The framework compares the resulting time series data with respective observational data and computes a suite of metrics such as Prediction Efficiency, Root Mean Square Error, Probability of Detection, Probability of False Detection, Heidke Skill Score for each model-data pair. The system then plots scores by event and aggregated over all events for all participating models and run settings. We are building on past experiences with model-data comparisons of magnetosphere and ionosphere model outputs in GEM2008, GEM-CEDAR CETI2010 and Operational Space Weather Model challenges (2010-2013). We can apply the framework also to solar-heliosphere as well as radiation belt models. The CAMEL framework takes advantage of model simulations described with Space Physics Archive Search and Extract (SPASE) metadata and a database backend design developed for a next-generation Run-on-Request system at the CCMC.
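    The scores named above are standard verification statistics. A minimal sketch of a few of them (the event thresholding and 2x2 contingency-table construction follow common convention, not necessarily CAMEL's exact implementation):

```python
import numpy as np

def rmse(model, obs):
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2)))

def prediction_efficiency(model, obs):
    """1 - MSE/Var(obs): 1 is perfect, 0 is no better than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    return float(1.0 - np.mean((np.asarray(model) - obs) ** 2) / np.var(obs))

def event_scores(model, obs, threshold):
    """POD, POFD and Heidke Skill Score from a 2x2 contingency table made by
    thresholding both series into event / no-event."""
    m = np.asarray(model) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(m & o)
    misses = np.sum(~m & o)
    fa = np.sum(m & ~o)            # false alarms
    cn = np.sum(~m & ~o)           # correct negatives
    pod = hits / (hits + misses)
    pofd = fa / (fa + cn)
    n = hits + misses + fa + cn
    expected = ((hits + misses) * (hits + fa) + (cn + misses) * (cn + fa)) / n
    hss = (hits + cn - expected) / (n - expected)   # skill above chance
    return float(pod), float(pofd), float(hss)
```

    Computing the same suite for every model-data pair and aggregating over events is what allows the cross-model, cross-event comparison plots described above.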

  7. Development of technology for modeling of a 1/8-scale dynamic model of the shuttle Solid Rocket Booster (SRB)

    NASA Technical Reports Server (NTRS)

    Levy, A.; Zalesak, J.; Bernstein, M.; Mason, P. W.

    1974-01-01

    A NASTRAN analysis was performed of the solid rocket booster (SRB) substructure of the space shuttle 1/8-scale structural dynamics model. The NASTRAN finite element modeling capability was first used to formulate a model of a cylinder of 10 in. radius and 200 in. length to investigate the accuracy and adequacy of the proposed grid point spacing. Results were compared with a shell analysis and demonstrated relatively accurate results for NASTRAN for the lower modes, which were of primary interest. A finite element model of the full SRB was then formed using CQUAD2 plate elements containing membrane and bending stiffness and CBAR offset bar elements to represent the longerons and frames. Three layers of three-dimensional CHEXA1 elements were used to model the propellant. This model, initially consisting of 4000 degrees of freedom (DOF), was reduced to 176 DOF using Guyan reduction. The model was then submitted for complex eigenvalue analysis. After considerable difficulty with attempts to run the complete model, it was split into two substructures. These were run separately and combined into a single 116-DOF A-set, which was run successfully. Results are reported.
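    Guyan reduction, used above to condense 4000 DOF to 176, statically condenses the stiffness and mass matrices onto a set of retained ("master") DOFs. A minimal sketch on an invented 3-DOF spring chain, not the SRB model:

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Guyan (static) reduction: condense K and M onto the master DOFs,
    assuming the omitted DOFs carry no applied load."""
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)
    # Static condensation: x_slave = -Kss^-1 Ksm x_master
    Kss_inv_Ksm = np.linalg.solve(K[np.ix_(slave, slave)],
                                  K[np.ix_(slave, master)])
    T = np.zeros((n, master.size))
    T[master, np.arange(master.size)] = 1.0
    T[np.ix_(slave, np.arange(master.size))] = -Kss_inv_Ksm
    return T.T @ K @ T, T.T @ M @ T

# Spring chain: ground -(k=2)- DOF0 -(k=1)- DOF1 -(k=1)- DOF2; keep DOFs 0, 2
K = np.array([[3.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.diag([2.0, 1.0, 1.0])
Kr, Mr = guyan_reduce(K, M, master=[0, 2])
```

    The reduced stiffness is exact for statics (the two inner springs act in series between the retained DOFs), while the reduced mass is approximate, which is why the method preserves the lower modes best, consistent with the abstract's focus on them.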

  8. Improved prediction of severe thunderstorms over the Indian Monsoon region using high-resolution soil moisture and temperature initialization

    PubMed Central

    Osuri, K. K.; Nadimpalli, R.; Mohanty, U. C.; Chen, F.; Rajeevan, M.; Niyogi, D.

    2017-01-01

    The hypothesis that realistic land conditions such as soil moisture/soil temperature (SM/ST) can significantly improve the modeling of mesoscale deep convection is tested over the Indian monsoon region (IMR). A high-resolution (3 km footprint) SM/ST dataset prepared from a land data assimilation system, as part of a national monsoon mission project, showed close agreement with observations. Experiments were conducted with (LDAS) and without (CNTL) initialization from the SM/ST dataset. Results highlight the significance of realistic land surface conditions for the numerical prediction of the initiation, movement, and timing of severe thunderstorms, as compared to the climatological fields currently used for initialization in the CNTL run. Realistic land conditions improved mass flux, convective updrafts, and diabatic heating in the boundary layer, which contributed to low-level positive potential vorticity. The LDAS run reproduced reflectivity echoes and associated rainfall bands more realistically. Improper representation of surface conditions in the CNTL run limited the evolution of boundary-layer processes, and the run thereby failed to simulate convection at the right time and place. These findings provide strong support for the role land conditions play in deep convection over the IMR. They also have direct implications for improving heavy rain forecasting over the IMR by developing realistic land conditions. PMID:28128293

  9. Elastic energy within the human plantar aponeurosis contributes to arch shortening during the push-off phase of running.

    PubMed

    Wager, Justin C; Challis, John H

    2016-03-21

    During locomotion, the lower limb tendons undergo stretch and recoil, functioning like springs that recycle energy with each step. Cadaveric testing has demonstrated that the arch of the foot operates in this capacity during simple loading, yet it remains unclear whether this function exists during locomotion. In this study, one of the arch's passive elastic tissues (the plantar aponeurosis; PA) was investigated to glean insights about it and the entire arch of the foot during running. Subject-specific computer models of the foot were driven using the kinematics of eight subjects running at 3.1 m/s using two initial contact patterns (rearfoot and non-rearfoot). These models were used to estimate PA strain, force, and elastic energy storage during the stance phase. To examine the release of stored energy, the foot joint moments, powers, and work created by the PA were computed. Mean elastic energy stored in the PA was 3.1±1.6 J, which was comparable to in situ testing values. Changes to the initial contact pattern did not change elastic energy storage or late stance PA function, but did alter PA pre-tensioning and function during early stance. In both initial contact pattern conditions, the PA power was positive during late stance, which reveals that the release of the stored elastic energy assists with shortening of the arch during push-off. As the PA is just one of the arch's passive elastic tissues, the entire arch may store additional energy and impact the metabolic cost of running. Copyright © 2016 Elsevier Ltd. All rights reserved.
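    The stored elastic energy reported above is, in essence, the area under the PA force-elongation curve during loading. A minimal sketch with an invented stiffening (toe-region) curve, not the study's subject-specific model:

```python
import numpy as np

def stored_elastic_energy(elongation_m, force_N):
    """Elastic energy stored during loading: area under the force-elongation
    curve, by trapezoidal integration."""
    x = np.asarray(elongation_m, dtype=float)
    f = np.asarray(force_N, dtype=float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

x = np.linspace(0.0, 0.008, 50)   # elongation up to 8 mm
f = 1.2e7 * x ** 2                # stiffening toe-region response
energy_J = stored_elastic_energy(x, f)   # a few joules for this toy curve
```

    A few joules per step is the same order as the 3.1±1.6 J reported above, which is why recycling this energy can plausibly matter for the metabolic cost of running.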

  10. Potential Vorticity Analysis of Low Level Thunderstorm Dynamics in an Idealized Supercell Simulation

    DTIC Science & Technology

    2009-03-01

    Severe Weather, Supercell, Weather Research and Forecasting Model, Advanced WRF ... ADVANCED RESEARCH WRF MODEL ... Data, Model Setup, and Methodology ... 03/11/2006 GFS model run. Top row: 11/12Z initialization. Middle row: 12 hour forecast valid at 12/00Z. Bottom row: 24 hour forecast valid at

  11. Statistical Properties of Differences between Low and High Resolution CMAQ Runs with Matched Initial and Boundary Conditions

    EPA Science Inventory

    The difficulty in assessing errors in numerical models of air quality is a major obstacle to improving their ability to predict and retrospectively map air quality. In this paper, using simulation outputs from the Community Multi-scale Air Quality Model (CMAQ), the statistic...

  12. Weather Research and Forecasting Model Sensitivity Comparisons for Warm Season Convective Initiation

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Hoeth, Brian; Blottman, Peter F.

    2007-01-01

    Mesoscale weather conditions can significantly affect space launch and landing operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). During the summer months, land-sea interactions that occur across KSC and CCAFS lead to the formation of a sea breeze, which can then spawn deep convection. These convective processes often last 60 minutes or less and pose a significant challenge to the forecasters at the National Weather Service (NWS) Spaceflight Meteorology Group (SMG). The main challenge is that a "GO" forecast for thunderstorms and precipitation is required at the 90-minute deorbit decision for End Of Mission (EOM) and at the 30-minute Return To Launch Site (RTLS) decision at the Shuttle Landing Facility. Convective initiation, timing, and mode also present a forecast challenge for the NWS in Melbourne, FL (MLB). The NWS MLB issues such tactical forecast information as Terminal Aerodrome Forecasts (TAFs), Spot Forecasts for fire weather and hazardous materials incident support, and severe/hazardous weather Watches, Warnings, and Advisories. Lastly, these forecasting challenges can also affect the 45th Weather Squadron (45 WS), which provides comprehensive weather forecasts for shuttle launch, as well as ground operations, at KSC and CCAFS. The need for accurate mesoscale model forecasts to aid in their decision making is crucial. Both SMG and NWS MLB are currently incorporating the Weather Research and Forecasting Environmental Modeling System (WRF EMS) software into their operations. The WRF EMS software allows users to employ both dynamical cores: the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model: the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). 
Having a series of initialization options and WRF cores, as well as many options within each core, provides SMG and NWS MLB with considerable flexibility. It also creates challenges, such as determining which configuration options are best to address specific forecast concerns. The goal of this project is to assess the different configurations available and to determine which configuration will best predict warm season convective initiation in East-Central Florida. Four different combinations of WRF initializations will be run (ADAS-ARW, ADAS-NMM, LAPS-ARW, and LAPS-NMM) at a 4-km resolution over the Florida peninsula and adjacent coastal waters. Five candidate convective initiation days using three different flow regimes over East-Central Florida will be examined, as well as two null cases (non-convection days). Each model run will be integrated for 12 hours, with three runs per day at 0900, 1200, and 1500 UTC. ADAS analyses will be generated every 30 minutes using Level II Weather Surveillance Radar-1988 Doppler (WSR-88D) data from all Florida radars to verify the convection forecast. These analyses will be run on the same domain as the four model configurations. To quantify model performance, model output will be subjectively compared to the ADAS analyses of convection to determine forecast accuracy. In addition, a subjective comparison of the performance of the ARW using a high-resolution local grid with 2-way nesting, 1-way nesting, and no nesting will be made for select convective initiation cases. The inner grid will cover the East-Central Florida region at a resolution of 1.33 km. The authors will summarize the relative skill of the various WRF configurations and how each configuration behaves relative to the others, as well as determine the best model configuration for predicting warm season convective initiation over East-Central Florida.

  13. A climatology and preliminary investigation of predictability of pristine nocturnal convective initiation in the central United States

    NASA Astrophysics Data System (ADS)

    Stelten, Sean; Gallus, William

    2017-04-01

    The prediction of convective initiation remains a challenge to forecasters in the central United States, especially for elevated events at night. This study examines a subset of 287 nocturnal elevated convective initiation events that occurred without direct influence from surface boundaries or pre-existing convection over a four-month period during the summer of 2015 (May, June, July, and August). Events were first classified into one of four types based on apparent formation mechanisms and location relative to any low-level jet. A climatology of each of the four types was performed, focusing on general spatial tendencies over the central United States and initiation timing trends. Additionally, analysis of initiation elevation was performed. Simulations from five convection-allowing models available during the Plains Elevated Convection At Night (PECAN) field campaign, along with four versions of a 4 km horizontal grid spacing Weather Research and Forecasting (WRF) model using different planetary boundary layer (PBL) parameterizations, were used to examine the predictability of these types of convective initiation. The climatology revealed a dual-peak pattern for initiation timing, with one peak near 0400 UTC and another near 0700 UTC; the dual-peak structure was present for all four types of events, suggesting that the evolution of the low-level jet was not directly responsible for the twin peaks. Subtle differences in the location and elevation of initiation for the different types were identified. The convection-allowing models run during the PECAN project were found to be more deficient in location than in timing. Threat scores typically averaged around 0.3 for the models, with false alarm ratios and hit rates both averaging around 0.5 to 0.6 for the various models. Initiation occurring within the low-level jet but far from a surface front was the one type that was occasionally missed by all five models examined. 
Once case for each of the four types was then simulated with four different configurations of a 4 km horizontal grid spacing WRF model. These WRF runs showed similar location errors and problems with initiating convection at a lower altitude than observed as was found from the simulations performed during PECAN. Three of the four PBL schemes behaved similarly, but one, the ACM2, was often an outlier, failing to indicate the convective initiation.
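
The threat scores, false alarm ratios, and hit rates quoted above are standard 2x2 contingency-table verification measures. A minimal sketch of how they are computed (the counts below are hypothetical, chosen only to fall in the quoted ranges, and are not taken from the study):

```python
def contingency_scores(hits, misses, false_alarms):
    """Standard 2x2 verification scores for yes/no event forecasts."""
    threat = hits / (hits + misses + false_alarms)  # threat score (CSI)
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    pod = hits / (hits + misses)                    # hit rate (POD)
    return threat, far, pod

# Hypothetical counts for illustration:
ts, far, pod = contingency_scores(hits=30, misses=20, false_alarms=30)
# ts = 30/80 = 0.375, far = 30/60 = 0.5, pod = 30/50 = 0.6
```

Note that the threat score penalizes both misses and false alarms, which is why it sits well below the hit rate even for a forecast with moderate skill.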

  14. Analysis of stratospheric ozone, temperature, and minor constituent data

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Jackman, Charles H.; Kaye, Jack A.; Rood, Richard B.

    1990-01-01

    The objective of this research is to use available satellite measurements of temperature and constituent concentrations to test the conceptual picture of stratospheric chemistry and transport. This was originally broken down into two sub-goals: first, to use the constituent data to search for critical tests of our understanding of stratospheric chemistry and second, to examine constituent transport processes emphasizing interactions with chemistry on various time scales. A third important goal which has evolved is to use the available solar backscattered ultraviolet (SBUV) and Total Ozone Mapping Spectrometer (TOMS) data from Nimbus 7 to describe the morphology of recent changes in Antarctic and global ozone with emphasis on searching for constraints to theories. The major effort now being pursued relative to the two original goals is our effort as a theoretical team for the Arctic Airborne Stratospheric Expedition (AASE). Our effort for the AASE is based on the 3D transport and chemistry model at Goddard. Our goal is to use this model to place the results from the mission data in a regional and global context. Specifically, we set out to make model runs starting in late December and running through March of 1989, both with and without heterogeneous chemistry. The transport is to be carried out using dynamical fields from a 4D data assimilation model being developed under separate funding from this task. We have successfully carried out a series of single constituent transport experiments. One of the things demonstrated by these runs was the difficulty in obtaining observed low N2O abundances in the vortex without simultaneously obtaining very high ozone values. Because the runs start in late December, this difficulty arises in the attempt to define consistent initial conditions for the 3D model. 
To construct a consistent set of initial conditions, we are using the 2D photochemistry-transport model of Jackman and Douglass and the mapping in potential temperature-potential vorticity space developed by Schoeberl and coworkers.

  15. EFFECT OF HEEL LIFTS ON PATELLOFEMORAL JOINT STRESS DURING RUNNING.

    PubMed

    Mestelle, Zachary; Kernozek, Thomas; Adkins, Kelly S; Miller, Jessica; Gheidi, Naghmeh

    2017-10-01

    Patellofemoral pain is a debilitating injury for many recreational runners. Excessive patellofemoral joint stress may be the underlying source of pain, and interventions often focus on ways to reduce patellofemoral joint stress. Heel lifts have been used as an intervention within Achilles tendon rehabilitation programs and to address leg length discrepancies. The purpose of this study was to examine the effect of running with heel lifts on patellofemoral joint stress, patellofemoral stress impulse, quadriceps force, step length, cadence, and other related kinematic and spatiotemporal variables. A repeated-measures research design was used. Sixteen healthy female runners completed five running trials in a controlled laboratory setting with and without 11-mm heel lifts inserted in a standard running shoe. Kinetic and kinematic data were used in combination with a static optimization technique to estimate individual muscle forces. These data were inserted into a patellofemoral joint model which was used to estimate patellofemoral joint stress and other variables during running. When running with heel lifts, peak patellofemoral joint stress and patellofemoral stress impulse were reduced by 4.2% (p=0.049) and 9.3% (p=0.002), respectively. Initial center of pressure was shifted anteriorly 9.1% when running with heel lifts (p<0.001) despite all runners utilizing a heel strike pattern. Dorsiflexion at initial contact was reduced 28% (p=0.016) when heel lifts were donned. No differences in step length and cadence (p>0.05) were shown between conditions. Heel lift use resulted in decreased patellofemoral joint stress and impulse without associated changes in step length or frequency, or other variables shown to influence patellofemoral joint stress. The center of pressure at initial contact was also more anterior using heel lifts. The use of heel lifts may have therapeutic benefits for runners with patellofemoral pain if the primary goal is to reduce patellofemoral joint stress. Level of evidence: 3b.

  16. EFFECT OF HEEL LIFTS ON PATELLOFEMORAL JOINT STRESS DURING RUNNING

    PubMed Central

    Mestelle, Zachary; Kernozek, Thomas; Adkins, Kelly S.; Miller, Jessica; Gheidi, Naghmeh

    2017-01-01

    Background Patellofemoral pain is a debilitating injury for many recreational runners. Excessive patellofemoral joint stress may be the underlying source of pain and interventions often focus on ways to reduce patellofemoral joint stress. Purpose Heel lifts have been used as an intervention within Achilles tendon rehabilitation programs and to address leg length discrepancies. The purpose of this study was to examine the effect of running with heel lifts on patellofemoral joint stress, patellofemoral stress impulse, quadriceps force, step length, cadence, and other related kinematic and spatiotemporal variables. Study Design A repeated-measures research design. Methods Sixteen healthy female runners completed five running trials in a controlled laboratory setting with and without 11-mm heel lifts inserted in a standard running shoe. Kinetic and kinematic data were used in combination with a static optimization technique to estimate individual muscle forces. These data were inserted into a patellofemoral joint model which was used to estimate patellofemoral joint stress and other variables during running. Results When running with heel lifts, peak patellofemoral joint stress and patellofemoral stress impulse were reduced by 4.2% (p=0.049) and 9.3% (p=0.002), respectively. Initial center of pressure was shifted anteriorly 9.1% when running with heel lifts (p<0.001) despite all runners utilizing a heel strike pattern. Dorsiflexion at initial contact was reduced 28% (p=0.016) when heel lifts were donned. No differences in step length and cadence (p>0.05) were shown between conditions. Conclusions Heel lift use resulted in decreased patellofemoral joint stress and impulse without associated changes in step length or frequency, or other variables shown to influence patellofemoral joint stress. The center of pressure at initial contact was also more anterior using heel lifts.
The use of heel lifts may have therapeutic benefits for runners with patellofemoral pain if the primary goal is to reduce patellofemoral joint stress. Level of Evidence 3b PMID:29181248

  17. Novel application of red-light runner proneness theory within traffic microsimulation to an actual signal junction.

    PubMed

    Bell, Margaret Carol; Galatioto, Fabio; Giuffrè, Tullio; Tesoriere, Giovanni

    2012-05-01

    Building on previous research, a conceptual framework based on potential-conflict analysis has provided a quantitative evaluation of 'proneness' to red-light running behaviour at urban signalised intersections of different geometric, flow and driver characteristics. The results provided evidence that commonly used violation rates could cause inappropriate evaluation of the extent of the red-light running phenomenon. Initially, an in-depth investigation of the functional form of the mathematical relationship between the potential and actual red-light runners was carried out. The application of the conceptual framework was tested on a signalised intersection in order to quantify the proneness to red-light running. For the particular junction studied, proneness for daytime was found to be 0.17 north and 0.16 south for opposing main road approaches and 0.42 east and 0.59 west for the secondary approaches. Further investigations were carried out using a traffic microsimulation model to explore those geometric features and traffic volumes (arrival patterns at the stop-line) that significantly affect red-light running. In this way the prediction capability of the proposed potential conflict model was improved. A degree of consistency in the measured and simulated red-light running was observed, and the conceptual framework was tested through a sensitivity analysis applied to different stop-line positions and traffic volume variations. The microsimulation, although at its early stages of development, has shown promise in its ability to model unintentional red-light running behaviour and, following further work through application to other junctions, potentially provides a tool for evaluating the effect of alternative junction designs on proneness. In brief, this paper proposes and applies a novel approach to model red-light running using a microsimulation and demonstrates consistency between the observed and theoretical results. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. 78 FR 61946 - Pheasant Run Wind, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2461-000] Pheasant Run Wind, LLC; Supplemental Notice That Initial Market- Based Rate Filing Includes Request for Blanket... Run Wind, LLC's application for market-based rate authority, with an accompanying rate schedule...

  19. Weather Research and Forecasting Model Sensitivity Comparisons for Warm Season Convective Initiation

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2007-01-01

    This report describes the work done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting warm season convection over East-Central Florida. The Weather Research and Forecasting Environmental Modeling System (WRF EMS) software allows users to choose between two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Besides model core and initialization options, the WRF model can be run with one- or two-way nesting. Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. This project assessed three different available model initializations to determine which configuration best predicts warm season convective initiation in East-Central Florida. The project also examined the use of one- and two-way nesting in predicting warm season convection.

  20. Examination of Soil Moisture Retrieval Using SIR-C Radar Data and a Distributed Hydrological Model

    NASA Technical Reports Server (NTRS)

    Hsu, A. Y.; ONeill, P. E.; Wood, E. F.; Zion, M.

    1997-01-01

    A major objective of soil moisture-related hydrological research during NASA's SIR-C/X-SAR mission was to determine and compare soil moisture patterns within humid watersheds using SAR data, ground-based measurements, and hydrologic modeling. Currently available soil moisture-inversion methods using active microwave data are only accurate when applied to bare and slightly vegetated surfaces. Moreover, as the surface dries down, the number of pixels that can provide estimated soil moisture by these radar inversion methods decreases, leading to less accuracy and confidence in the retrieved soil moisture fields at the watershed scale. The impact of these errors in microwave-derived soil moisture on hydrological modeling of vegetated watersheds has yet to be addressed. In this study a coupled water and energy balance model operating within a topographic framework is used to predict surface soil moisture for both bare and vegetated areas. In the first model run, the hydrological model is initialized using a standard baseflow approach, while in the second model run, soil moisture values derived from SIR-C radar data are used for initialization. The results, which compare favorably with ground measurements, demonstrate the utility of combining radar-derived surface soil moisture information with basin-scale hydrological modeling.

  1. Statistical analysis of NWP rainfall data from Poland.

    NASA Astrophysics Data System (ADS)

    Starosta, Katarzyna; Linkowska, Joanna

    2010-05-01

    The goal of this work is to summarize the latest results of precipitation verification in Poland. At IMGW, COSMO_PL version 4.0 has been running operationally. The model configuration is: 14 km horizontal grid spacing, initial times at 00 UTC and 12 UTC, and a forecast range of 72 h. The fields from the model were verified against Polish SYNOP stations. The verification was performed using a new verification tool. For accumulated precipitation, the indices FBI, POD, FAR, and ETS are calculated from the contingency table. In this paper a comparison of monthly and seasonal verification of 6 h, 12 h, and 24 h accumulated precipitation in 2009 is presented. Since February 2010, a version of the model with 7 km grid spacing has been running at IMGW. The results of precipitation verification for the two different model resolutions will be shown.
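
The four indices named above are all computed from the same 2x2 contingency table of forecast versus observed precipitation events. A minimal sketch with the standard textbook formulas (the example counts are hypothetical, not taken from the verification results):

```python
def precipitation_indices(hits, misses, false_alarms, correct_negatives):
    """FBI, POD, FAR and ETS from a 2x2 precipitation contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    fbi = (hits + false_alarms) / (hits + misses)   # frequency bias index
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    # ETS corrects the threat score for hits expected by random chance
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return fbi, pod, far, ets

# Hypothetical counts for illustration:
fbi, pod, far, ets = precipitation_indices(40, 10, 20, 130)
# fbi = 1.2 (over-forecasting), pod = 0.8, far = 1/3, ets = 25/55
```

FBI above 1 indicates the model forecasts the event more often than it is observed, which is a common failure mode for accumulated precipitation thresholds.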

  2. Development of the CELSS Emulator at NASA JSC

    NASA Technical Reports Server (NTRS)

    Cullingford, Hatice S.

    1989-01-01

    The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator, which was initiated in 1988 using the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.

  3. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
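
The paper's reference implementation is in C with MPI; the worker-initiated ("pull") pattern it describes can be sketched with Python's standard library instead. The names below are illustrative, not taken from the reference implementation:

```python
import queue
import threading

def run_bag_of_tasks(tasks, worker_fn, n_workers=4):
    """On-demand allocation: each worker pulls the next task from a shared
    bag as soon as it finishes its previous one, so faster workers simply
    process more tasks and the makespan stays short even when workers are
    heterogeneous."""
    bag = queue.Queue()
    for task in tasks:
        bag.put(task)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                task = bag.get_nowait()   # worker-initiated pull
            except queue.Empty:
                return                    # bag empty: this worker is done
            result = worker_fn(task)
            with lock:
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The contrast is with static allocation, which hands each node a fixed share of tasks up front and therefore leaves fast nodes idle while slow nodes finish their share.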

  4. Soundscapes

    DTIC Science & Technology

    2015-09-30

    1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes Michael B. Porter and Laurel J. Henderson...hindcasts, nowcasts, and forecasts of the time-evolving soundscape . In terms of the types of sound sources, we will focus initially on commercial...modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically

  5. Can we trust climate models to realistically represent severe European windstorms?

    NASA Astrophysics Data System (ADS)

    Trzeciak, Tomasz M.; Knippertz, Peter; Pirret, Jennifer S. R.; Williams, Keith D.

    2016-06-01

    Cyclonic windstorms are one of the most important natural hazards for Europe, but robust climate projections of the position and the strength of the North Atlantic storm track are not yet possible, posing significant risks to European societies and the (re)insurance industry. Previous studies addressing the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data show large disagreement between different climate models, different ensemble members of the same model, and observed climatologies of intense cyclones. One weakness of such evaluations lies in the difficulty of separating influences of the climate model's basic state from the influence of fast processes on the development of the most intense storms, which could create compensating effects and therefore suggest higher reliability than there really is. This work aims to shed new light on this problem through a cost-effective "seamless" approach of hindcasting 20 historical severe storms with two global climate models, ECHAM6 and the GA4 configuration of the Met Office Unified Model, run in numerical weather prediction mode using different lead times, and horizontal and vertical resolutions. These runs are then compared to re-analysis data. 
The main conclusions from this work are: (a) objectively identified cyclone tracks are represented satisfactorily by most hindcasts; (b) sensitivity to vertical resolution is low; (c) cyclone depth is systematically under-predicted for a coarse resolution of T63 by both climate models; (d) no systematic bias is found for the higher resolution of T127 out to about three days, demonstrating that climate models are in fact able to represent the complex dynamics of explosively deepening cyclones well, if given the correct initial conditions; (e) an analysis using a recently developed diagnostic tool based on the surface pressure tendency equation points to too weak diabatic processes, mainly latent heating, as the main source for the under-prediction in the coarse-resolution runs. Finally, an interesting implication of these results is that the too low number of deep cyclones in many free-running climate simulations may therefore be related to an insufficient number of storm-prone initial conditions. This question will be addressed in future work.

  6. Weather extremes in very large, high-resolution ensembles: the weatherathome experiment

    NASA Astrophysics Data System (ADS)

    Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.

    2011-12-01

    Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run on personal computers volunteered by the general public at 25 and 50 km resolution, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments are focussing on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011), but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.

  7. 78 FR 61946 - Pheasant Run Wind II, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2462-000] Pheasant Run Wind II, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Run Wind II, LLC's application for market-based rate authority, with an accompanying rate schedule...

  8. Matter-antimatter asymmetry induced by a running vacuum coupling

    NASA Astrophysics Data System (ADS)

    Lima, J. A. S.; Singleton, D.

    2017-12-01

    We show that a CP-violating interaction induced by a derivative coupling between the running vacuum and a non-conserving baryon current may dynamically break CPT and trigger baryogenesis through an effective chemical potential. By assuming a non-singular class of running vacuum cosmologies which provides a complete cosmic history (from an early inflationary de Sitter stage to the present day quasi-de Sitter acceleration), it is found that an acceptable baryon asymmetry is generated for many different choices of the model parameters. It is interesting that the same ingredient (running vacuum energy density) addresses several open cosmological questions/problems: avoids the initial singularity, provides a smooth exit for primordial inflation, alleviates both the coincidence and the cosmological constant problems, and, finally, is also capable of explaining the generation of matter-antimatter asymmetry in the very early Universe.

  9. Individual Responses to a Barefoot Running Program: Insight Into Risk of Injury.

    PubMed

    Tam, Nicholas; Tucker, Ross; Astephen Wilson, Janie L

    2016-03-01

    Barefoot running is of popular interest because of its alleged benefits for runners, including reduced injury risk and increased economy of running. Little is understood about whether all runners can gain the proposed benefits of barefoot running and how barefoot running may affect long-term injury risk. The purpose of this study was to determine whether runners can achieve the proposed favorable kinematic changes and reduction in loading rate after a progressive training program that included barefoot running. It was hypothesized that not all individuals would experience a decrease in initial loading rate facilitated by increased ankle plantar flexion after a progressive barefoot running program; it was further hypothesized that relationships exist between changes in initial loading rate and changes in sagittal ankle angle. Descriptive laboratory study. A total of 26 habitually shod runners completed an 8-week, progressively introduced barefoot running program. Pre- and postintervention barefoot and shod kinematics, electromyography, and ground-reaction force data of the lower limb were collected. Ankle and knee kinematics and kinetics, initial loading rates, spatiotemporal variables, muscle activity during preactivation, and ground contact were assessed in both conditions before and after the intervention. Individual responses were analyzed by separating runners into nonresponders, negative responders, and positive responders based on no change, increase, and decrease in barefoot initial loading rate, respectively. No biomechanical changes were found in the group after the intervention. However, condition differences did persist during both preactivation and ground contact. The positive-responder group had greater plantar flexion, increased biceps femoris and gluteus medius preactivation, and decreased rectus femoris muscle activity between testing periods. 
The negative responders landed in greater barefoot dorsiflexion after the intervention, and the nonresponders did not change. An overall change in ankle flexion angle was associated with a change in initial loading rate (r² = 0.345, P = .002) in the barefoot but not shod condition. Eight weeks of progressive barefoot running did not change overall group biomechanics, but subgroups of responders (25% of the entire group) were identified who had specific changes that reduced the initial loading rate. It appears that changes in initial loading rate are explained by changes in ankle flexion angle at initial ground contact. Uninstructed barefoot running training does not reduce initial loading rate in all runners transitioning from shod to barefoot conditions. Some factors have been identified that may assist sports medicine professionals in the evaluation and management of runners at risk of injury. Conscious instruction to runners may be required for them to acquire habitual barefoot running characteristics and to reduce risk of injury. © 2016 The Author(s).

  10. Modeling a maintenance simulation of the geosynchronous platform

    NASA Technical Reports Server (NTRS)

    Kleiner, A. F., Jr.

    1980-01-01

    A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
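
The pass structure described above (re-initialize, run one mission, compile statistics) can be sketched as a minimal discrete-event loop. Everything below - the names, the rates, and the exponential failure model - is an illustrative assumption, not the paper's actual model:

```python
import random

def run_simulation(n_passes, mission_lifetime, failure_rate, seed=0):
    """Run many passes; each pass re-initializes the model, simulates one
    mission as a sequence of failure/maintenance events, and records
    per-pass statistics."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_passes):
        t, trips, first_maintenance = 0.0, 0, None   # re-initialize the model
        while True:
            t += rng.expovariate(failure_rate)       # draw time to next failure
            if t > mission_lifetime:
                break                                # mission ends
            trips += 1                               # failure triggers maintenance
            if first_maintenance is None:
                first_maintenance = t
        stats.append({"trips": trips, "first_maintenance": first_maintenance})
    return stats

report = run_simulation(n_passes=1000, mission_lifetime=10.0, failure_rate=0.5)
avg_trips = sum(p["trips"] for p in report) / len(report)
```

With a failure rate of 0.5 per time unit over a 10-unit mission, the expected number of maintenance trips per pass is about 5, and averaging over 1000 passes is exactly the kind of end-of-run report the abstract describes.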

  11. Smap Soil Moisture Data Assimilation for the Continental United States and Eastern Africa

    NASA Astrophysics Data System (ADS)

    Blankenship, C. B.; Case, J.; Zavodsky, B.; Crosson, W. L.

    2016-12-01

    The NASA Short-Term Prediction Research and Transition (SPoRT) Center at Marshall Space Flight Center manages near-real-time runs of the Noah Land Surface Model within the NASA Land Information System (LIS) over Continental U.S. (CONUS) and Eastern Africa domains. Soil moisture products from the CONUS model run are used by several NOAA/National Weather Service Weather Forecast Offices for flood and drought situational awareness. The baseline LIS configuration is the Noah model driven by atmospheric and combined radar/gauge precipitation analyses, with input satellite-derived real-time green vegetation fraction, on a 3-km grid for the CONUS. This configuration is being enhanced by adding the assimilation of Level 2 Soil Moisture Active/Passive (SMAP) soil moisture retrievals in a parallel run beginning on 1 April 2015. Our implementation of SMAP assimilation includes a cumulative distribution function (CDF) matching approach that aggregates points with similar soil types. This method allows creation of robust CDFs with a short data record, and also permits the correction of local anomalies that may arise from poor forcing data (e.g., quality-control problems with rain gauges). Validation results using in situ soil monitoring networks in the CONUS are shown, with comparisons to the baseline SPoRT-LIS run. Initial results are also presented from a modeling run in Eastern Africa, forced by Integrated Multi-satellitE Retrievals for GPM (IMERG) precipitation data. Strategies for spatial downscaling and for dealing with the effective depth of the retrieval product are also discussed.
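
A generic per-pixel version of CDF matching can be sketched as follows. This is only an illustration of the technique: the SPoRT implementation additionally pools points by soil type to build robust CDFs from a short record, which is not shown here, and all names are assumptions:

```python
import numpy as np

def cdf_match(retrievals, model_clim, obs_clim):
    """Map each satellite retrieval onto the model's climatology so that it
    keeps the cumulative probability (rank) it had in the observed
    climatology -- removing systematic bias before assimilation."""
    obs_sorted = np.sort(obs_clim)
    model_sorted = np.sort(model_clim)
    # empirical CDF value of each retrieval within the observed climatology
    p = np.searchsorted(obs_sorted, retrievals, side="right") / len(obs_sorted)
    p = np.clip(p, 0.0, 1.0)
    # invert the model CDF at the same probabilities
    return np.quantile(model_sorted, p)

# Illustrative check: if the model climatology is the observed climatology
# shifted by +0.1, a retrieval of 0.5 should map to roughly 0.6.
obs_clim = np.linspace(0.0, 1.0, 101)
model_clim = obs_clim + 0.1
matched = cdf_match(np.array([0.5]), model_clim, obs_clim)
```

Because the mapping is built from climatological distributions rather than paired samples, it removes the mean and variance mismatch between retrieval and model without requiring a long overlapping record at every pixel.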

  12. Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations

    NASA Astrophysics Data System (ADS)

    Carrassi, A.; Weber, R. J. T.; Guemas, V.; Doblas-Reyes, F. J.; Asif, M.; Volpi, D.

    2014-04-01

    Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best possible available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model is let free to run a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate. The large variety of experimental setups, models and observational networks adopted worldwide make it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds an additional difficulty toward the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand. Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance using FFI when a good observational network is available and reveals the direct relation of its skill with the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases of skill can only be expected in coincidence with model upgrades. We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. 
In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations in which only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.
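
The FFI/AI contrast at the heart of this comparison reduces to two different choices of initial state. A schematic one-variable sketch (the function and variable names are illustrative, not the paper's notation):

```python
import numpy as np

def full_field_init(obs_state):
    """FFI: the initial model state is replaced wholesale by the
    observation-based estimate of the real state."""
    return obs_state.copy()

def anomaly_init(obs_state, obs_clim, model_clim):
    """AI: only the observed anomaly is assimilated, added on top of the
    model's own climatology, so the forecast starts near the model
    attractor and drifts less."""
    return model_clim + (obs_state - obs_clim)

# Illustrative example: observed state 2.0, observed climatology 1.5,
# model climatology 1.0 (the model runs systematically too cold, say).
obs_state = np.array([2.0])
obs_clim = np.array([1.5])
model_clim = np.array([1.0])
x0_ffi = full_field_init(obs_state)                    # starts at the observation
x0_ai = anomaly_init(obs_state, obs_clim, model_clim)  # starts at 1.0 + 0.5
```

FFI starts closer to the truth but then drifts back toward the biased model climate; AI accepts a biased start in exchange for a drift-free forecast of the anomaly.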

  13. Changes in hippocampal theta rhythm and their correlations with speed during different phases of voluntary wheel running in rats.

    PubMed

    Li, J-Y; Kuo, T B J; Hsieh, I-T; Yang, C C H

    2012-06-28

    Hippocampal theta rhythm (4-12 Hz) can be observed during locomotor behavior, but findings on the relationship between locomotion speed and theta frequency are inconsistent if not contradictory. The inconsistency may stem from the difficulty previous analyses and protocols have had in excluding the effects of behavioral training. We recorded the first or second voluntary wheel-running bout of each day, and hypothesized that theta frequency and activity correlate with speed differently in different running phases. By simultaneously recording electroencephalography, physical activity, and wheel-running speed, this experiment explored theta oscillations during spontaneous running over the 12-h dark period. The recording was completely wireless and allowed the animal to run freely in the wheel while being recorded. Theta frequency and theta power of middle frequency were elevated before running; theta frequency, theta power of middle frequency, physical activity, and running speed then maintained persistently high levels during running. The slopes of theta frequency and theta activity (4-9.5 Hz) during initial running differed from the same values during subsequent running. During initial running, running speed was positively correlated with theta frequency and with theta power of middle frequency. Over the 12-h dark period, running speed did not correlate positively with theta frequency but was significantly correlated with theta power of middle frequency. Thus, theta frequency was associated with running speed only at the initiation of running. Furthermore, theta power of middle frequency was associated with speed and with physical activity during running when chronological order was not taken into consideration. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandersall, K S; Tarver, C M; Garcia, F

    Shock initiation experiments on the HMX-based explosives LX-10 (95% HMX, 5% Viton by weight) and LX-07 (90% HMX, 10% Viton by weight) were performed to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. A 101 mm diameter propellant-driven gas gun was utilized to initiate the explosive samples, with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop plot for these experiments and prior experiments on another HMX-based explosive, LX-04 (85% HMX, 15% Viton by weight), will be shown, discussed, and compared as a function of the binder content. This parameter set will provide additional information to ensure accurate code predictions for safety scenarios involving HMX explosives with different percent binder content additions.

  15. mr: A C++ library for the matching and running of the Standard Model parameters

    NASA Astrophysics Data System (ADS)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. 
Solution method: Numerical integration of analytic expressions Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X. Running time: less than 1 second References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arxiv:1110.3397 [cs.MS
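As a much-simplified illustration of the "running" that mr performs (the library itself does two-loop matching and three- to four-loop evolution; the one-loop QCD formula below is standard textbook material, not code from mr):

```python
import math

def alpha_s_one_loop(alpha_mu0, mu0, mu, nf=5):
    # One-loop QCD running of the strong coupling:
    #   d(alpha_s)/d(ln mu^2) = -(beta0 / 4pi) * alpha_s^2
    # with beta0 = 11 - 2*nf/3 for nf active quark flavours,
    # integrated analytically between the scales mu0 and mu.
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mu0 / (1.0 + alpha_mu0 * beta0 / (4.0 * math.pi)
                        * math.log(mu ** 2 / mu0 ** 2))

# Asymptotic freedom: the coupling decreases toward higher scales.
a_mz = 0.118                                  # alpha_s near the Z mass
a_1tev = alpha_s_one_loop(a_mz, 91.19, 1000.0)
```

mr solves the full coupled system of Standard Model renormalization group equations numerically (via OdeInt) rather than using such a closed form, but the qualitative behavior, couplings evolving logarithmically with scale from matched initial conditions, is the same.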

  16. Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium

    NASA Astrophysics Data System (ADS)

    Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.

    2015-06-01

    Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers, when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51 200 and 409 600 nucleons from 0.080 fm-3 down to 0.00125 fm-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: The sizes of nuclei in the final state of decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models at a temperature of 1 MeV, while constant-density runs produce nuclei smaller than those obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei, in the final state of the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.

  17. TARANTULA 2011 in JWL++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Haylett, D; Vitello, P

    2011-10-27

    Using square zoning, the 2011 version of the kinetic package Tarantula matches cylinder data, cylinder dead zones, and cylinder failure with the same settings for the first time. The key is the use of maximum pressure rather than instantaneous pressure. Runs are at 40, 200 and 360 z/cm using JWL++ as the host model. The model also does run-to-detonation, thin-pulse initiation with a P-t curve and air gap crossing, all in cylindrical geometry. Two sizes of MSAD/LX-10/LX-17 snowballs work somewhat with these settings, but are too weak, so that divergent detonation is a challenge for the future. Butterfly meshes are considered but do not appear to solve the issue.

  18. Using Web 2.0 Techniques To Bring Global Climate Modeling To More Users

    NASA Astrophysics Data System (ADS)

    Chandler, M. A.; Sohl, L. E.; Tortorici, S.

    2012-12-01

    The Educational Global Climate Model (EdGCM) has been used for many years in undergraduate courses and professional development settings to teach the fundamentals of global climate modeling and climate change simulation to students and teachers. While course participants have reported a high level of satisfaction in these courses and overwhelmingly claim that EdGCM projects are worth the effort, there is often a high level of frustration during the initial learning stages. Many of the problems stem from issues related to installation of the software suite and to the length of time it can take to run initial experiments: two or more days of continuous run time may be required before enough data have been gathered to begin analyses. Asking users to download existing simulation data has not been a solution because the GCM data sets are several gigabytes in size, requiring substantial bandwidth and stable, dedicated internet connections. As a means of getting around these problems we have been developing a Web 2.0 utility called EzGCM ("Easy GCM") which emphasizes that participants learn the steps involved in climate modeling research: constructing a hypothesis, designing an experiment, running a computer model and assessing when an experiment has finished (reached equilibrium), using scientific visualization to support analysis, and finally communicating the results through social networking methods. We use classic climate experiments that can be "rediscovered" through exercises with EzGCM, and we are attempting to make this Web 2.0 tool an entry point into climate modeling for teachers with little time to cover the subject, for users with limited computer skills, and for those who want an introduction to the process before tackling more complex projects with EdGCM.

  19. On the Lulejian-I Combat Model

    DTIC Science & Technology

    1976-08-01

    possible initial massing of the attacking side's resources, the model tries to represent in a game-theoretic context the adversary nature of the...sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen...sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the

  20. North American Observing Systems: An Interagency Group Runs Tests at the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Some 250,000 weather reports are collected by the National Weather Service (NWS) every day. Important measurements are taken by satellites, weather balloons, ground weather stations, airplanes, oceangoing ships, and tethered ocean buoys. Local or global weather models rely on these reports to provide the raw data used as initial conditions for the models to produce a weather prediction.

  1. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for rapid tsunami early warning systems. Its application to realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variation of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
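For context on what a closed-form runup estimate looks like, here is a sketch of one classical result of this kind, Synolakis' (1987) runup law for a non-breaking solitary wave on a plane beach. Whether the authors use this particular formula is an assumption; it is quoted only as an example of the analytical approach:

```python
import math

def solitary_wave_runup(H, d, beach_slope):
    # Synolakis (1987) runup law for a non-breaking solitary wave of
    # height H in offshore water of depth d climbing a plane beach of
    # slope tan(beta) = beach_slope:
    #   R / d = 2.831 * sqrt(cot(beta)) * (H / d)**(5/4)
    # Returns the maximum vertical runup R in the same units as d.
    cot_beta = 1.0 / beach_slope
    return d * 2.831 * math.sqrt(cot_beta) * (H / d) ** 1.25
```

The appeal for early warning is visible in the formula itself: given a near-shore wave height from inverse modelling, the runup follows from a single algebraic evaluation rather than a nested-grid numerical simulation.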

  2. Organosolv delignification of Eucalyptus globulus: Kinetic study of autocatalyzed ethanol pulping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliet, M.; Rodriguez, F.; Santos, A.

    2000-01-01

    The autocatalyzed delignification of Eucalyptus globulus in 50% ethanol (w/w) was modeled as the irreversible and consecutive dissolution of initial, bulk, and residual lignin. Their respective contributions to total lignin were estimated as 9, 75, and 16%. Isothermal pulping experiments were carried out to evaluate an empirical kinetic model among eight proposals corresponding to different reaction schemes. The calculated activation energies were 96.5, 98.5, and 40.8 kJ/mol for initial, bulk, and residual delignification, respectively. The influence of hydrogen ion concentration was expressed by a power-law function model. The kinetic model developed here was validated using data from nonisothermal pulping runs.
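A common simplification of such multi-fraction delignification kinetics treats the three lignin fractions as dissolving in parallel, each by first-order kinetics with an Arrhenius rate constant. The sketch below uses the fractions and activation energies quoted in the abstract, but the pre-exponential factors are placeholders, not values from the paper:

```python
import math

R_GAS = 8.314e-3  # gas constant in kJ/(mol K)

def rate_constant(A, Ea, T):
    # Arrhenius rate constant; Ea in kJ/mol, T in kelvin.
    return A * math.exp(-Ea / (R_GAS * T))

def lignin_remaining(t, T, fractions=(0.09, 0.75, 0.16),
                     Ea=(96.5, 98.5, 40.8), A=(1e10, 1e10, 1e2)):
    # Fraction of total lignin remaining at time t if the initial,
    # bulk and residual fractions each dissolve by first-order
    # kinetics in parallel.  The pre-exponential factors A are
    # illustrative placeholders, NOT fitted values from the paper.
    return sum(f * math.exp(-rate_constant(a, ea, T) * t)
               for f, ea, a in zip(fractions, Ea, A))
```

The much lower activation energy of the residual fraction (40.8 vs. ~97 kJ/mol) is what makes residual lignin comparatively insensitive to temperature, a feature any such model reproduces regardless of the pre-exponential factors chosen.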

  3. The "ripple effect": Health and community perceptions of the Indigenous Marathon Program on Thursday Island in the Torres Strait, Australia.

    PubMed

    Macniven, Rona; Plater, Suzanne; Canuto, Karla; Dickson, Michelle; Gwynn, Josephine; Bauman, Adrian; Richards, Justin

    2018-02-19

    Physical inactivity is a key health risk among Aboriginal and Torres Strait Islander (Indigenous) Australians. We examined perceptions of the Indigenous Marathon Program (IMP) in a remote Torres Strait island community. Semi-structured interviews with community and program stakeholders (n = 18; 14 Indigenous) examined barriers and enablers to running and the influence of the IMP on the community. A questionnaire asked 104 running event participants (n = 42 Indigenous) about their physical activity behaviours, running motivation and perceptions of program impact. Qualitative data were analysed using thematic content analysis, and quantitative data were analysed using descriptive statistics. Interviews revealed six main themes: community readiness, changing social norms to adopt healthy lifestyles, importance of social support, program appeal to hard-to-reach population groups, program sustainability and initiation of broader healthy lifestyle ripple effects beyond running. Barriers to running in the community were personal (cultural attitudes; shyness) and environmental (infrastructure; weather; dogs). Enablers reflected potential strategies to overcome described barriers. Indigenous questionnaire respondents were more likely to report being inspired to run by IMP runners than non-Indigenous respondents. Positive "ripple" effects of the IMP on running and broader health were described to have occurred through local role modelling of healthy lifestyles by IMP runners that reduced levels of "shame" and embarrassment, a common barrier to physical activity among Indigenous Australians. A high initial level of community readiness for behaviour change was also reported. SO WHAT?: Strategies to overcome this "shame" factor and community readiness measurement should be incorporated into the design of future Indigenous physical activity programs. © 2018 Australian Health Promotion Association.

  4. Development of in vitro models to demonstrate the ability of PecSys®, an in situ nasal gelling technology, to reduce nasal run-off and drip

    PubMed Central

    2013-01-01

    Many of the increasing number of intranasal products available for either local or systemic action can be considered sub-optimal, most notably where nasal drip or run-off give rise to discomfort/tolerability issues or reduced/variable efficacy. PecSys, an in situ gelling technology, contains low methoxy (LM) pectin which gels due to interaction with calcium ions present in nasal fluid. PecSys is designed to spray readily, only forming a gel on contact with the mucosal surface. The present study employed two in vitro models to confirm that gelling translates into a reduced potential for drip/run-off: (i) Using an inclined TLC plate treated with a simulated nasal electrolyte solution (SNES), mean drip length [±SD, n = 10] was consistently much shorter for PecSys (1.5 ± 0.4 cm) than non-gelling control (5.8 ± 1.6 cm); (ii) When PecSys was sprayed into a human nasal cavity cast model coated with a substrate containing a physiologically relevant concentration of calcium, PecSys solution was retained at the site of initial deposition with minimal redistribution, and no evidence of run-off/drip anteriorly or down the throat. In contrast, non-gelling control was significantly more mobile and consistently redistributed with run-off towards the throat. Conclusion: In both models PecSys significantly reduced the potential for run-off/drip, ensuring that more solution remained at the deposition site. In vivo, this enhancement of retention will provide optimum patient acceptability, modulate drug absorption and maximize the ability of drugs to be absorbed across the nasal mucosa and thus reduce variability in drug delivery. PMID:22803832

  5. Investigation of Wave Energy Converter Effects on the Nearshore Environment: A Month-Long Study in Monterey Bay CA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Jesse D.; Chang, Grace; Magalen, Jason

    2014-09-01

    A modified version of an industry-standard wave modeling tool, SNL-SWAN, was used to perform model simulations for hourly initial wave conditions measured during the month of October 2009. The model was run with an array of 50 wave energy converters (WECs) and compared with model runs without WECs. Maximum changes in Hs were found in the lee of the WEC array along the angles of incident wave direction, and minimal changes were found along the western side of the model domain due to wave shadowing by land. The largest wave height reductions occurred during observed typhoon conditions and resulted in 14% decreases in Hs along the Santa Cruz shoreline. Shoreline reductions in Hs were 5% during south swell wave conditions and negligible during average monthly wave conditions.

  6. Parameterization of a numerical 2-D debris flow model with entrainment: a case study of the Faucon catchment, Southern French Alps

    NASA Astrophysics Data System (ADS)

    Hussin, H. Y.; Luna, B. Quan; van Westen, C. J.; Christen, M.; Malet, J.-P.; van Asch, Th. W. J.

    2012-10-01

    The occurrence of debris flows has been recorded for more than a century in the European Alps, where they pose a risk to settlements and other human infrastructure and have led to deaths, building damage and traffic disruptions. One of the difficulties in the quantitative hazard assessment of debris flows is estimating the run-out behavior, which includes the run-out distance and the related hazard intensities such as the height and velocity of a debris flow. In addition, as observed in the French Alps, entrainment of material during the run-out can increase the volume 10-50 times with respect to the initially mobilized mass triggered at the source area. The entrainment process is evidently an important factor that can further determine the magnitude and intensity of debris flows. Research on numerical modeling of debris flow entrainment is still ongoing and involves some difficulties. This is partly due to our lack of knowledge of the actual process of the uptake and incorporation of material, and to the effect of entrainment on the final behavior of a debris flow. It is therefore important to model the effects of this key erosional process on run-out formation and the related intensities. In this study we analyzed a debris flow with high entrainment rates that occurred in 2003 in the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The historic event was back-analyzed using the Voellmy rheology and an entrainment model embedded in the RAMMS 2-D numerical modeling software. A sensitivity analysis of the rheological and entrainment parameters was carried out, and the effects of modeling with entrainment on the debris flow run-out, height and velocity were assessed.

  7. Evaluation of predicted diurnal cycle of precipitation after tests with convection and microphysics schemes in the Eta Model

    NASA Astrophysics Data System (ADS)

    Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.

    2012-04-01

    Physics parameterizations and the model's vertical and horizontal resolutions, for example, can contribute significantly to uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the model's precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes; the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from ERA-Interim reanalysis. Over the sea, the 0.25-degree sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 1 January 2010; the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, so that model precipitation was produced entirely by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily data from automatic stations were used to evaluate the runs and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km when compared against observations. The configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.

  8. Numerical computation of Pop plot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    The Pop plot (distance of run to detonation versus initial shock pressure) is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plots can serve as the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
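The automation described here rests on the standard property that the Pop plot is linear in log-log coordinates. A minimal sketch of fitting and using such a correlation (synthetic data, not the PBX 9502 calibration from the report):

```python
import math

def fit_pop_plot(pressures, run_distances):
    # The Pop plot is conventionally fit as a straight line in
    # log-log space:  log10(x*) = a + b * log10(P),
    # where x* is the run distance to detonation at shock pressure P.
    # Fit a and b by ordinary least squares on the logarithms.
    xs = [math.log10(p) for p in pressures]
    ys = [math.log10(x) for x in run_distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def run_to_detonation(P, a, b):
    # Predicted run distance to detonation at shock pressure P.
    return 10.0 ** (a + b * math.log10(P))
```

In a validation workflow of the kind the report describes, the (a, b) pair fit to simulated run distances would be compared against the pair fit to gauge data; the slope b is negative, since stronger shocks detonate in shorter runs.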

  9. Voluntary wheel running improves recovery from a moderate spinal cord injury.

    PubMed

    Engesser-Cesar, Christie; Anderson, Aileen J; Basso, D Michele; Edgerton, V R; Cotman, Carl W

    2005-01-01

    Recently, locomotor training has been shown to improve overground locomotion in patients with spinal cord injury (SCI). This has triggered renewed interest in the role of exercise in rehabilitation after SCI. However, there are no mouse models for voluntary exercise and recovery of function following SCI. Here, we report that voluntary wheel running improves recovery from SCI in mice. C57Bl/10 female mice received a 60-kdyne T9 contusion injury with an IH impactor after 3 weeks of voluntary wheel running or 3 weeks of standard single-housing conditions. Following a 7-day recovery period, running mice were returned to their running wheels. Weekly open-field testing measured locomotor recovery using the Basso, Beattie and Bresnahan (BBB) locomotor rating scale and the Basso Mouse Scale (BMS), a scale recently developed specifically for mice. Initial experiments using standard rung wheels showed that wheel running impaired recovery, but subsequent experiments using a modified flat-surface wheel showed improved recovery with exercise. By 14 days post SCI, the modified flat-surface running group had significantly higher BBB and BMS scores than the sedentary group. A repeated-measures ANOVA shows that locomotor recovery of modified flat-surface running mice was significantly improved compared to sedentary animals (p < 0.05). Locomotor assessment using a ladder beam task also shows a significant improvement in the modified flat-surface runners (p < 0.05). Finally, fibronectin staining shows no significant difference in lesion size between the two groups. These data represent the first mouse model showing that voluntary exercise improves recovery after SCI.

  10. Seasonal predictions of equatorial Atlantic SST in a low-resolution CGCM with surface heat flux correction

    NASA Astrophysics Data System (ADS)

    Dippe, Tina; Greatbatch, Richard; Ding, Hui

    2016-04-01

    The dominant mode of interannual variability in tropical Atlantic sea surface temperatures (SSTs) is the Atlantic Niño or Zonal Mode. Akin to the El Niño-Southern Oscillation in the Pacific sector, it is able to impact the climate both of the adjacent equatorial African continent and of remote regions. Due to heavy biases in the mean-state climate of the equatorial-to-subtropical Atlantic, however, most state-of-the-art coupled global climate models (CGCMs) are unable to realistically simulate equatorial Atlantic variability. In this study, the Kiel Climate Model (KCM) is used to investigate the impact of a simple bias alleviation technique on the predictability of equatorial Atlantic SSTs. Two sets of seasonal forecasting experiments are performed: an experiment using the standard KCM (STD), and an experiment with additional surface heat flux correction (FLX) that efficiently removes the SST bias from simulations. Initial conditions for both experiments are generated by the KCM run in partially coupled mode, a simple assimilation technique that forces the KCM with observed wind stress anomalies and preserves SST as a fully prognostic variable. Seasonal predictions for both sets of experiments are run four times yearly for 1981-2012. Results: Heat flux correction substantially improves the simulated variability in the initialization runs for boreal summer and fall (June-October). In boreal spring (March-May), however, neither the STD nor the FLX initialization runs are able to capture the observed variability. FLX predictions show no consistent enhancement of skill relative to the predictions of the STD experiment over the course of the year. The skill of persistence forecasts is hardly beaten by either of the two experiments in any season, limiting the usefulness of the few forecasts that show significant skill. 
However, FLX-forecasts initialized in May recover skill in July and August, the peak season of the Atlantic Niño (anomaly correlation coefficients of about 0.3). Further study is necessary to determine the mechanism that drives this potentially useful recovery.

  11. History of Running is Not Associated with Higher Risk of Symptomatic Knee Osteoarthritis: A Cross-Sectional Study from the Osteoarthritis Initiative

    PubMed Central

    Lo, Grace H.; Driban, Jeffrey B.; Kriska, Andrea M.; McAlindon, Timothy E.; Souza, Richard B.; Petersen, Nancy J.; Storti, Kristi L.; Eaton, Charles B.; Hochberg, Marc C.; Jackson, Rebecca D.; Kwoh, C. Kent; Nevitt, Michael C.; Suarez-Almazor, Maria E.

    2016-01-01

    Objective: Regular physical activity, including running, is recommended on the basis of known cardiovascular and mortality benefits. However, controversy exists over whether running can be harmful to knees. The purpose of this study is to evaluate the relationship of running with knee pain, radiographic osteoarthritis, and symptomatic osteoarthritis. Methods: This was a retrospective cross-sectional study of Osteoarthritis Initiative participants (2004-2014) with knee x-ray readings, symptom assessments, and completed lifetime physical activity surveys. Using logistic regression, we evaluated the association of a history of leisure running with the outcomes of frequent knee pain, radiographic osteoarthritis, and symptomatic osteoarthritis. Symptomatic osteoarthritis required at least one knee with both radiographic osteoarthritis and pain. Results: Of 2637 participants, 55.8% were female; mean age was 64.3 (SD 8.9) years; body mass index was 28.5 (SD 4.9) kg/m2; and 29.5% ran at some time in their lives. Unadjusted odds ratios of pain, radiographic osteoarthritis, and symptomatic osteoarthritis for prior runners and current runners compared to those who never ran were 0.83 and 0.71 (p for trend = 0.002), 0.83 and 0.78 (p for trend = 0.01), and 0.81 and 0.64 (p for trend = 0.0006), respectively. Adjusted models were similar, except that the radiographic osteoarthritis results were attenuated. Conclusions and Relevance: There is no increased risk of symptomatic knee osteoarthritis among self-selected runners compared with non-runners in a cohort recruited from the community. In those without osteoarthritis, running does not appear detrimental to the knees. PMID:27333572

  12. Western diet increases wheel running in mice selectively bred for high voluntary wheel running.

    PubMed

    Meek, T H; Eisenmann, J C; Garland, T

    2010-06-01

    Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.

  13. AIDA Model 1.0

    DTIC Science & Technology

    1990-08-01

Distribution Unlimited Accession Number: 3539 Publication Date: Aug 01, 1990 Title: AIDA Model 1.0 Final Report Corporate Author Or Publisher: Software...Part: 1 Author: D.R.Sloggett Date: 27.7.90 Issue: 23 C.J.Slim Title: AIDA Model 1.0 Final Report i Doc. Ref.: AIDA/3/26/01 U Cross Ref.: AIDA/1/06/01...functionality and integrity. These tests also provided initial performance measures for the AIDA Model 1.0 system. The results from the baseline runs performed

  14. Initializing numerical weather prediction models with satellite-derived surface soil moisture: Data assimilation experiments with ECMWF's Integrated Forecast System and the TMI soil moisture data set

    NASA Astrophysics Data System (ADS)

    Drusch, M.

    2007-02-01

Satellite-derived surface soil moisture data sets are readily available and have been used successfully in hydrological applications. In many operational numerical weather prediction systems the initial soil moisture conditions are analyzed from the modeled background and 2 m temperature and relative humidity. This approach has proven effective in improving surface latent and sensible heat fluxes, and consequently the forecast, over large geographical domains. However, since soil moisture is not always related to screen level variables, model errors and uncertainties in the forcing data can accumulate in root zone soil moisture. Remotely sensed surface soil moisture is directly linked to the model's uppermost soil layer and therefore is a stronger constraint for the soil moisture analysis. For this study, three data assimilation experiments with the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) have been performed for the 2-month period of June and July 2002: a control run based on the operational soil moisture analysis, an open loop run with freely evolving soil moisture, and an experimental run incorporating TMI (TRMM Microwave Imager) derived soil moisture over the southern United States. In this experimental run the satellite-derived soil moisture product is introduced through a nudging scheme using 6-hourly increments. Apart from the soil moisture analysis, the system setup reflects the operational forecast configuration including the atmospheric 4D-Var analysis. Soil moisture analyzed in the nudging experiment is the most accurate estimate when compared against in situ observations from the Oklahoma Mesonet. The corresponding forecast for 2 m temperature and relative humidity is almost as accurate as in the control experiment. Furthermore, it is shown that the soil moisture analysis influences local weather parameters including the planetary boundary layer height and cloud coverage.
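The nudging update described above can be sketched in a few lines. This is a toy illustration only: the coefficient `k`, the variable names, and all values are placeholders, not the IFS scheme's actual formulation.

```python
# Toy sketch of a nudging update toward satellite-derived soil moisture.
# The coefficient k and all values are illustrative placeholders.
def nudge(theta_model, theta_obs, k=0.5):
    """Relax modeled top-layer soil moisture toward the observed estimate."""
    return theta_model + k * (theta_obs - theta_model)

# Four 6-hourly increments over one day.
theta = 0.10                        # model volumetric soil moisture (m3/m3)
tmi_obs = [0.22, 0.21, 0.20, 0.19]  # satellite-derived estimates
for obs in tmi_obs:
    theta = nudge(theta, obs)
```

Each increment moves the model state a fixed fraction of the way toward the observation, so the analysis always stays bounded between the background and the observed value.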

  15. Do initial conditions matter? A comparison of model climatologies generated from different initial states

    NASA Technical Reports Server (NTRS)

    Spar, J.; Cohen, C.; Wu, P.

    1981-01-01

A coarse-mesh (8 by 10) seven-layer global climate model was used to compute 15 months of meteorological history in two perpetual January experiments on a water planet (without continents) with a zonally symmetric climatological January sea surface temperature field. In the first of the two water planet experiments the initial atmospheric state was a set of zonal mean values of specific humidity, temperature, and wind at each latitude. In the second experiment the model was initialized with globally uniform mean values of specific humidity and temperature on each sigma level surface, constant surface pressure (1010 mb), and zero wind everywhere. A comparison was made of the mean January climatic states generated by the two water planet experiments. The first two months of each 15-month run were discarded, and 13-month averages were computed from months 3 through 15.
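The averaging procedure amounts to discarding the spin-up months and averaging the remainder; as a trivial sketch with stand-in monthly means:

```python
# Drop the first two (spin-up) monthly means and average months 3-15,
# as in each 15-month perpetual-January run. Values are stand-ins.
monthly_means = list(range(1, 16))            # 15 monthly mean fields
climatology = sum(monthly_means[2:]) / 13.0   # average of months 3 through 15
```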

  16. Coordinate and synergistic effects of extensive treadmill exercise and ovariectomy on articular cartilage degeneration.

    PubMed

    Miyatake, Kazumasa; Muneta, Takeshi; Ojima, Miyoko; Yamada, Jun; Matsukura, Yu; Abula, Kahaer; Sekiya, Ichiro; Tsuji, Kunikazu

    2016-05-31

    Although osteoarthritis (OA) is a multifactorial disease, little has been reported regarding the cooperative interaction among these factors on cartilage metabolism. Here we examined the synergistic effect of ovariectomy (OVX) and excessive mechanical stress (forced running) on articular cartilage homeostasis in a mouse model resembling a human postmenopausal condition. Mice were randomly divided into four groups, I: Sham, II: OVX, III: Sham and forced running (60 km in 6 weeks), and IV: OVX and forced running. Histological and immunohistochemical analyses were performed to evaluate the degeneration of articular cartilage and synovitis in the knee joint. Morphological changes of subchondral bone were analyzed by micro-CT. Micro-CT analyses showed significant loss of metaphyseal trabecular bone volume/tissue volume (BV/TV) after OVX as described previously. Forced running increased the trabecular BV/TV in all mice. In the epiphyseal region, no visible alteration in bone morphology or osteophyte formation was observed in any of the four groups. Histological analysis revealed that OVX or forced running respectively had subtle effects on cartilage degeneration. However, the combination of OVX and forced running synergistically enhanced synovitis and articular cartilage degeneration. Although morphological changes in chondrocytes were observed during OA initiation, no signs of bone marrow edema were observed in any of the four experimental groups. We report the coordinate and synergistic effects of extensive treadmill exercise and ovariectomy on articular cartilage degeneration. Since no surgical procedure was performed on the knee joint directly in this model, this model is useful in addressing the molecular pathogenesis of naturally occurring OA.

  17. Soil Moisture and Snow Cover: Active or Passive Elements of Climate

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Erickson, David J., III; Robertson, Franklin R.; Roads, John O.; Arnold, James E. (Technical Monitor)

    2002-01-01

    A key question is the extent to which surface effects such as soil moisture and snow cover are simply passive elements or whether they can affect the evolution of climate on seasonal and longer time scales. We have constructed ensembles of predictability studies using the NCAR CCM3 in which we compared the relative roles of initial surface and atmospheric conditions over the central and western U.S. in determining the subsequent evolution of soil moisture and of snow cover. Results from simulations with realistic soil moisture anomalies indicate that internal climate variability may be the strongest factor, with some indication that the initial atmospheric state is also important. Model runs with exaggerated soil moisture reductions (near-desert conditions) showed a much larger effect, with warmer surface temperatures, reduced precipitation, and lower surface pressures; the latter indicating a response of the atmospheric circulation. These results suggest the possibility of a threshold effect in soil moisture, whereby an anomaly must be of a sufficient size before it can have a significant impact on the atmospheric circulation and climate. Results from simulations with realistic snow cover anomalies indicate that the time of year can be crucial. When introduced in late winter, these anomalies strongly affected the subsequent evolution of snow cover. When introduced in early winter, however, little or no effect is seen on the subsequent snow cover. Runs with greatly exaggerated initial snow cover indicate that the high reflectivity of snow is the most important process by which snow cover can impact climate, through lower surface temperatures and increased surface pressures. The results to date were obtained for model runs with present-day conditions. We are currently analyzing runs made with projected forcings for the 21st century to see if these results are modified in any way under likely scenarios of future climate change. 
An intriguing new statistical technique involving 'clustering' is developed to assist in this analysis.

  18. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
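One toy way to isolate model error from forecast errors at two lead times runs as follows (an illustration under our own simplifying assumption, not either of the paper's two definitions): if initial-condition error variance is amplified by a known growth factor between the lead times, any unexplained remainder can be attributed to model error.

```python
# Toy illustration: attribute to model error whatever part of the
# lead-2 error variance that linear growth of the lead-1 variance
# cannot explain. The growth factor g is assumed known here.
def model_error_fraction(e1, e2, g):
    """A Talagrand-ratio-like tau: share of lead-2 error variance e2
    not explained by amplifying the lead-1 variance e1 by g."""
    q = e2 - g * e1    # variance attributed to model error
    return q / e2

tau = model_error_fraction(e1=1.0, e2=3.0, g=2.0)
```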

  19. Climateprediction.com: Public Involvement, Multi-Million Member Ensembles and Systematic Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.

    2001-12-01

    The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also to the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison to the whole project, these pilot results constitute a 20-50 member perturbed physics climate ensemble with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.
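The combinatorial pressure described above is easy to see: perturbing even a handful of parameters to a few values each and taking all combinations multiplies quickly. The parameter names below are invented placeholders, not the Unified Model's actual parameter list.

```python
# All combinations of per-parameter perturbations multiply rapidly.
from itertools import product

params = ["entrainment_rate", "ice_fall_speed", "rh_crit",
          "cloud_water_threshold", "conv_timescale", "diffusion_coeff",
          "sea_ice_albedo"]
levels = 3  # e.g. low / default / high for each parameter
ensemble = list(product(range(levels), repeat=len(params)))
n_members = len(ensemble)  # 3**7 combinations from only 7 parameters
```

Scaling this to a few more parameters, or more values per parameter, is what pushes the required ensemble into the multi-million-member range that only distributed computing can supply.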

  20. In search of the pitching momentum that enables some lizards to sustain bipedal running at constant speeds

    PubMed Central

    Van Wassenbergh, Sam; Aerts, Peter

    2013-01-01

    The forelimbs of lizards are often lifted from the ground when they start sprinting. Previous research pointed out that this is a consequence of the propulsive forces from the hindlimbs. However, despite forward acceleration being hypothesized as necessary to lift the head, trunk and forelimbs, some species of agamids, teiids and basilisks sustain running in a bipedal posture at a constant speed for a relatively long time. Biomechanical modelling of steady bipedal running in the agamid Ctenophorus cristatus now shows that a combination of three mechanisms must be present to generate the angular impulse needed to cancel or oppose the effect of gravity. First, the trunk must be lifted significantly to displace the centre of mass more towards the hip joint. Second, the nose-up pitching moment resulting from aerodynamic forces exerted at the lizard's surface must be taken into account. Third, the vertical ground-reaction forces at the hindlimb must show a certain degree of temporal asymmetry with higher forces closer to the instant of initial foot contact. Such asymmetrical vertical ground-reaction force profiles, which differ from the classical spring-mass model of bipedal running, seem inherent to the windmilling, splayed-legged running style of lizards. PMID:23658116
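A toy calculation (our assumptions, not the paper's model) shows why temporal asymmetry in the vertical ground-reaction force matters: if the foot's moment arm about the hip sweeps linearly from ahead of to behind the hip during stance, a time-symmetric force profile yields zero net angular impulse, while a profile that peaks early in stance yields a net nose-up impulse.

```python
# Moment arm sweeps from +5 cm (ahead of hip) to -5 cm (behind hip).
N = 1001
ts = [i / (N - 1) for i in range(N)]       # normalized stance time
arm = [0.05 * (1 - 2 * t) for t in ts]     # moment arm (m)

def angular_impulse(force):
    """Integrate force * moment arm over stance (arbitrary units)."""
    dt = 1.0 / (N - 1)
    return sum(f * x for f, x in zip(force, arm)) * dt

symmetric = [4 * t * (1 - t) for t in ts]    # spring-mass-like profile
early_peak = [t * (1 - t) ** 2 for t in ts]  # higher force near touchdown
pitch_sym = angular_impulse(symmetric)
pitch_early = angular_impulse(early_peak)
```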

  1. Impact and intrusion of the foot of a lizard running rapidly on sand

    NASA Astrophysics Data System (ADS)

    Li, Chen; Hsieh, Tonia; Umbanhowar, Paul; Goldman, Daniel

    2012-11-01

    The desert-dwelling zebra-tailed lizard (Callisaurus draconoides, 10 cm, 10 g) runs rapidly (~10 BL/s) on granular media (GM) like sand and gravel. On loosely packed GM, its large hind feet penetrate into the substrate during each step. Based on above-ground observation, a previous study (Li et al., JEB 2012) hypothesized that the hind foot rotated in the vertical plane subsurface to generate lift. To explain the observed center-of-mass dynamics, the model assumed that ground reaction force was dominated by speed-independent frictional drag. Here we use x-ray high speed video to obtain subsurface foot kinematics of the lizard running on GM, which confirms the hypothesized subsurface foot rotation following rapid foot impact at touchdown. However, using impact force measurements, a resistive force model, and the observed foot kinematics, we find that impact force during initial foot touchdown and speed-independent frictional drag during rotation only account for part of the required lift to support locomotion. This suggests that the rapid foot rotation further allows the lizard to utilize inertial forces from the local acceleration of the substrate (particles), similar to small robots running on GM (Qian et al., RSS 2012) and the basilisk (Jesus) lizard running on water.

  2. Examining the Impacts of High-Resolution Land Surface Initialization on Model Predictions of Convection in the Southeastern U.S.

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Santos, Pablo; Medlin, Jeffrey M.; Jedlovec, Gary J.

    2009-01-01

One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within physics parameterizations, model resolution limitations, as well as uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture and temperature, ground fluxes, and vegetation are necessary to better simulate the interactions between the land surface and atmosphere, and ultimately improve predictions of local circulations and summertime pulse convection. The NASA Short-term Prediction Research and Transition (SPoRT) Center has been conducting studies to examine the impacts of high-resolution land surface initialization data generated by offline simulations of the NASA Land Information System (LIS) on subsequent numerical forecasts using the Weather Research and Forecasting (WRF) model (Case et al. 2008, to appear in the Journal of Hydrometeorology). Case et al. present improvements to simulated sea breezes and surface verification statistics over Florida by initializing WRF with land surface variables from an offline LIS spin-up run, conducted on the exact WRF domain and resolution.
The current project extends the previous work over Florida, focusing on selected case studies of typical pulse convection over the southeastern U.S., with an emphasis on improving local short-term WRF simulations over the Mobile, AL and Miami, FL NWS county warning areas. Future efforts may involve examining the impacts of assimilating remotely-sensed soil moisture data, and/or introducing weekly greenness vegetation fraction composites (as opposed to monthly climatologies) into offline NASA LIS runs. Based on positive impacts, the offline LIS runs could be transitioned into an operational mode, providing land surface initialization data to NWS forecast offices in real time.

  3. Institutionalizing Faculty Mentoring within a Community of Practice Model

    ERIC Educational Resources Information Center

    Smith, Emily R.; Calderwood, Patricia E.; Storms, Stephanie Burrell; Lopez, Paula Gill; Colwell, Ryan P.

    2016-01-01

    In higher education, faculty work is typically enacted--and rewarded--on an individual basis. Efforts to promote collaboration run counter to the individual and competitive reward systems that characterize higher education. Mentoring initiatives that promote faculty collaboration and support also defy the structural and cultural norms of higher…

  4. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level primitive equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, only conventional surface and upper air observations are used; (2) SAT runs, satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.
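The first two verification measures are standard and can be sketched in a few lines, here with made-up sea level pressure values (hPa):

```python
# RMS error and systematic error (mean bias) of a set of forecasts
# against verifying analyses. Values are invented for illustration.
def rms_error(forecast, analysis):
    n = len(forecast)
    return (sum((f - a) ** 2 for f, a in zip(forecast, analysis)) / n) ** 0.5

def systematic_error(forecast, analysis):
    """Mean bias: positive means the forecasts are too high on average."""
    n = len(forecast)
    return sum(f - a for f, a in zip(forecast, analysis)) / n

fcst = [1012.0, 1008.5, 1015.0, 1003.0]
anal = [1010.0, 1009.0, 1013.0, 1004.0]
rms = rms_error(fcst, anal)
bias = systematic_error(fcst, anal)
```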

  5. 77 FR 58255 - Takes of Marine Mammals Incidental to Specified Activities; Marine Geophysical Survey off the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ... straight portions of the track lines as well as the initial portions of the run-out (offshore) sections and later portions of the run-in (inshore) sections. During turns and most of the initial portion of the run... propeller has four blades and the shaft typically rotates at 750 revolutions per minute. The vessel also has...

  6. Towards the use of HYCOM in Coupled ENSO Prediction: Assessment of ENSO Skill in Forced Global HYCOM

    DTIC Science & Technology

    2016-08-10

CICE spun-up state forced with climatological surface atmospheric fluxes. This run was initialized from Generalized Digital Environmental Model 4...GDEM4) climatological temperature and salinity. It was configured with 41 layers. 2. Global 0.72° HYCOM/CICE forced with NOGAPS for 2003-2012. The same...surface temperature, sea-ice concentration, and precipitation products. It was initialized from Levitus-PHC2 climatology. It was configured with 32 layers

  7. A commercially viable virtual reality knee arthroscopy training system.

    PubMed

    McCarthy, A D; Hollands, R J

    1998-01-01

Arthroscopy is a minimally invasive form of surgery used to inspect joints. It is complex to learn, yet current training methods appear inadequate, negating the potential benefits to the patient. This paper describes the development and initial assessment of a cost-effective virtual-reality-based system for training surgeons in arthroscopy of the knee. The system runs on a PC. Initial assessments by surgeons have been positive, and current developments in deformable models are described.

  8. A Transient Initialization Routine of the Community Ice Sheet Model for the Greenland Ice Sheet

    NASA Astrophysics Data System (ADS)

    van der Laan, Larissa; van den Broeke, Michiel; Noël, Brice; van de Wal, Roderik

    2017-04-01

The Community Ice Sheet Model (CISM) is to be applied in future simulations of the Greenland Ice Sheet under a range of climate change scenarios, determining the sensitivity of the ice sheet to individual climatic forcings. In order to achieve reliable results regarding ice sheet stability and assess the probability of future occurrence of tipping points, a realistic initial ice sheet geometry is essential. The current work describes and evaluates the development of a transient initialization routine, using NGRIP 18O isotope data to create a temperature anomaly field. Based on the latter, the surface mass balance components runoff and precipitation are perturbed over the past 125,000 years. The precipitation and runoff fields originate from a downscaled 1 km resolution version of the regional climate model RACMO2.3 for the period 1961-1990. The result of the initialization routine is a present-day ice sheet with a transient memory of the last glacial-interglacial cycle, which will serve as the initial condition for the future runs.
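An anomaly-driven perturbation of this kind might be sketched as follows. The conversion factors, the functional forms, and all values below are hypothetical placeholders, not the study's calibration.

```python
# Hedged sketch of an anomaly-driven SMB perturbation. ALPHA, GAMMA,
# the 0.1 runoff sensitivity, and all values are hypothetical.
ALPHA = 2.4   # assumed K per permil of 18O anomaly
GAMMA = 0.05  # assumed fractional precipitation change per K

def perturb_smb(precip, runoff, d18o, d18o_ref):
    """Perturb 1961-1990 reference precipitation and runoff with a
    temperature anomaly derived from an ice-core 18O anomaly."""
    dT = ALPHA * (d18o - d18o_ref)                   # temperature anomaly (K)
    precip_p = precip * (1.0 + GAMMA * dT)           # colder -> drier
    runoff_p = max(0.0, runoff * (1.0 + 0.1 * dT))   # colder -> less melt
    return precip_p, runoff_p

# Glacial conditions: more negative 18O anomaly than the reference.
p, r = perturb_smb(precip=300.0, runoff=100.0, d18o=-40.0, d18o_ref=-35.0)
```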

  9. Long-run evolution of the global economy: 2. Hindcasts of innovation and growth

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2015-03-01

    Long-range climate forecasts rely upon integrated assessment models that link the global economy to greenhouse gas emissions. This paper evaluates an alternative economic framework, outlined in Part 1, that is based on physical principles rather than explicitly resolved societal dynamics. Relative to a reference model of persistence in trends, model hindcasts that are initialized with data from 1950 to 1960 reproduce trends in global economic production and energy consumption between 2000 and 2010 with a skill score greater than 90%. In part, such high skill appears to be because civilization has responded to an impulse of fossil fuel discovery in the mid-twentieth century. Forecasting the coming century will be more of a challenge because the effect of the impulse appears to have nearly run its course. Nonetheless, the model offers physically constrained futures for the coupled evolution of civilization and climate during the Anthropocene.
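A skill score of the common form 1 - MSE(model)/MSE(reference), with persistence in trends as the reference, can be computed as below; the error values are invented for illustration.

```python
# Skill score relative to a reference forecast: 1 means perfect,
# 0 means no better than the reference, negative means worse.
def skill_score(model_errors, reference_errors):
    def mse(errors):
        return sum(e * e for e in errors) / len(errors)
    return 1.0 - mse(model_errors) / mse(reference_errors)

ss = skill_score(model_errors=[0.1, -0.2, 0.15],
                 reference_errors=[1.0, -0.8, 0.9])
```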

  10. Sensitivity of a mesoscale model to initial specification of relative humidity, liquid water and vertical motion

    NASA Technical Reports Server (NTRS)

    Kalb, M. W.; Perkey, D. J.

    1985-01-01

    The influence of synoptic scale initial conditions on the accuracy of mesoscale precipitation modeling is investigated. Attention is focused on the relative importance of the water vapor, cloud water, rain water, and vertical motion, with the analysis carried out using the Limited Area Mesoscale Prediction System (LAMPS). The fully moist primitive equation model has 15 levels and a terrain-following sigma coordinate system. A K-theory approach was implemented to model the planetary boundary layer. A total of 15 sensitivity simulations were run to investigate the effects of the synoptic initial conditions of the four atmospheric variables. The absence of synoptic cloud and rain water amounts in the initialization caused a 2 hr delay in the onset of precipitation. The delay was increased if synoptic-scale vertical motion was used instead of mesoscale values. Both the delays and a choice of a smoothed moisture field resulted in underestimations of the total rainfall.

  11. The Initial Conditions and Evolution of Isolated Galaxy Models: Effects of the Hot Gas Halo

    NASA Astrophysics Data System (ADS)

    Hwang, Jeong-Sun; Park, Changbom; Choi, Jun-Hwan

    2013-02-01

We construct several Milky Way-like galaxy models containing a gas halo (as well as gaseous and stellar disks, a dark matter halo, and a stellar bulge) following either an isothermal or an NFW density profile with varying mass and initial spin. In addition, galactic winds associated with star formation are tested in some of the simulations. We evolve these isolated galaxy models using the GADGET-3 N-body/hydrodynamic simulation code, paying particular attention to the effects of the gaseous halo on the evolution. We find that the evolution of the models is strongly affected by the adopted gas halo component, particularly in the gas dissipation and the star formation activity in the disk. The model without a gas halo shows an increasing star formation rate (SFR) at the beginning of the simulation for some hundreds of millions of years and then a continuously decreasing rate to the end of the run at 3 Gyr. In contrast, the SFRs in the models with a gas halo, depending on the density profile and the total mass of the halo, are either relatively flat throughout the simulations or increase until the middle of the run (over a gigayear) and then decrease to the end. The models with the more centrally concentrated NFW gas halo show overall higher SFRs than those with an isothermal gas halo of equal mass. Gas accretion from the halo onto the disk also occurs more in the models with the NFW gas halo; however, this accretion takes place mostly in the inner part of the disk and does not contribute significantly to the star formation unless the gas halo has a very high density at the central part. Rotation of the gas halo is found to lower the SFR in the model. The SFRs in the runs including galactic winds are lower than those in the same runs without winds. We conclude that the effects of a hot gaseous halo on the evolution of galaxies are generally too significant to be simply ignored.
We also expect that more hydrodynamical processes in galaxies could be understood through numerical simulations employing both gas disk and gas halo components.

  12. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    NASA Astrophysics Data System (ADS)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Such simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine if there was any justification in using simplified models. A simulation was also performed using the BCAT code (CTH companion tool) that assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to only affect numerical predictions for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
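Pop-plot data are conventionally near-linear in log-log coordinates, so a calibration target reduces to two fitted constants; a minimal least-squares sketch with made-up data (not the ARL measurements):

```python
# Fit log10(run distance) = intercept + slope * log10(pressure).
import math

pressures = [2.0, 4.0, 8.0, 16.0]   # input shock pressure (illustrative)
run_dists = [8.0, 4.0, 2.0, 1.0]    # run-to-detonation distance

lx = [math.log10(p) for p in pressures]
ly = [math.log10(d) for d in run_dists]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
intercept = my - slope * mx
```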

  13. Explosive Model Tarantula 4d/JWL++ Calibration of LX-17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Vitello, P A

    2008-09-30

Tarantula is an explosive kinetic package intended to do detonation, shock initiation, failure, corner-turning with dead zones, gap tests and air gaps in reactive flow hydrocode models. The first, 2007-2008 version with monotonic Q is here run inside JWL++ with square zoning from 40 to 200 zones/cm on ambient LX-17. The model splits the rate behavior in every zone into sections set by the hydrocode pressure, P + Q. As the pressure rises, we pass through the no-reaction, initiation, ramp-up/failure and detonation sections sequentially. We find that the initiation and pure detonation rate constants are largely insensitive to zoning but that the ramp-up/failure rate constant is extremely sensitive. At no time does the model pass every test, but the pressure-based approach generally works. The best values for the ramp/failure region are listed here in Mb units.

  14. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  15. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
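The Expected Improvement acquisition at the core of EGO has a standard closed form under a Gaussian predictive distribution; a generic sketch for maximization (not the authors' implementation):

```python
# Closed-form Expected Improvement given the surrogate's predictive
# mean mu, standard deviation sigma, and the best value found so far.
import math

def expected_improvement(mu, sigma, best):
    """E[max(f - best, 0)] for f ~ N(mu, sigma**2)."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best) * cdf + sigma * pdf

# High surrogate uncertainty keeps EI above the raw mean improvement.
ei = expected_improvement(mu=1.2, sigma=0.5, best=1.0)
```

A multi-start search over an acquisition surface like this is what yields multiple candidate test points per iteration, as in the modification described above.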

  16. Wheel-running reinforcement in free-feeding and food-deprived rats.

    PubMed

    Belke, Terry W; Pierce, W David

    2016-03-01

    Rats experiencing sessions of 30-min free access to wheel running were assigned to ad-lib and food-deprived groups and given additional sessions of free wheel activity. Subsequently, both ad-lib and deprived rats lever pressed for 60 s of wheel running on fixed-ratio (FR) 1, variable-ratio (VR) 3, VR 5, and VR 10 schedules, and on a response-initiated variable-interval (VI) 30-s schedule. Finally, the ad-lib rats were switched to food deprivation and the food-deprived rats were switched to free food, as rats continued responding on the response-initiated VI 30-s schedule. Wheel running functioned as reinforcement for both ad-lib and food-deprived rats. Food-deprived rats, however, ran faster and had higher overall lever-pressing rates than free-feeding rats. On the VR schedules, wheel-running rates correlated positively with local and overall lever-pressing rates for deprived, but not ad-lib, rats. On the response-initiated VI 30-s schedule, wheel-running rates and lever-pressing rates changed for ad-lib rats switched to food deprivation, but not for food-deprived rats switched to free feeding. The overall pattern of results suggested different sources of control for wheel running: intrinsic motivation, contingencies of automatic reinforcement, and food-restricted wheel running. An implication is that generalizations about operant responding for wheel running in food-deprived rats may not extend to wheel running and operant responding of free-feeding animals. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Wavelet data compression for archiving high-resolution icosahedral model data

    NASA Astrophysics Data System (ADS)

    Wang, N.; Bao, J.; Lee, J.

    2011-12-01

    With the increase in resolution of global circulation models, it becomes ever more important to develop highly effective solutions for archiving the huge datasets these models produce. While lossless data compression guarantees the accuracy of the restored data, it can achieve only limited reduction in data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualization. For archival purposes, however, a detailed study is needed to evaluate its impact on datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for the summer and winter seasons. An icosahedral-grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was run out to 10 days. The forecast results were then compared with forecasts from the model run with the original, uncompressed initial conditions. Several visual comparisons, as well as statistics of the numerical comparisons, are presented. These results indicate that, for a specified minimum accuracy loss, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed for increasing archive efficiency while retaining a complete set of metadata for each archived file.
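    The lossy step in wavelet compression is thresholding of small detail coefficients; reconstruction from the kept coefficients then bounds the pointwise error. A toy one-level Haar sketch of the idea (purely illustrative; the actual compression software used in the study is far more sophisticated):

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform; len(x) must be even."""
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def compress(x, threshold):
    """Zero out detail coefficients below threshold, then reconstruct.
    Small details carry little energy, so the restored field stays close
    to the original while the zeroed coefficients compress very well."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)
```

    With threshold 0 the round trip is exact (lossless); raising the threshold trades accuracy for compressibility, which is the knob the "specified minimum accuracy loss" above refers to.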

  18. Aftermarket Performance of Health Care and Biopharmaceutical IPOs: Evidence From ASEAN Countries

    PubMed Central

    Komenkul, Kulabutr; Kiranand, Santi

    2017-01-01

    We examine the evidence from long-run abnormal returns using data for 76 health care and biopharmaceutical initial public offerings (IPOs) listed over a 29-year period between 1986 and 2014 in Association of Southeast Asian Nations (ASEAN) countries: Indonesia, Malaysia, Singapore, Thailand, the Philippines, Vietnam, Myanmar, and Laos. Based on the event-time approach, the 3-year stock returns of the IPOs are investigated using the cumulative abnormal return (CAR) and the buy-and-hold abnormal return (BHAR). As a robustness check, the calendar-time approach, using the market model as well as the Fama-French and Carhart models, was applied to verify the long-run abnormal returns. We found evidence that the health care IPOs outperform in the long run, irrespective of the alternative benchmarks and methods. In addition, when we divide our sample into five groups by listing country, our results show that the health care stock prices of the Singaporean firms behaved differently from those of most other firms in ASEAN. The Singaporean IPOs are characterized by worse post-offering performance, whereas the IPOs of Malaysian and Thai health care companies performed better in the long run. PMID:28853306
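    The two event-time measures used here differ in how returns accumulate: CAR sums period-by-period abnormal returns, while BHAR compounds the full holding period before differencing. A minimal sketch (variable names are illustrative; the paper's benchmark construction is more involved):

```python
def car(stock_returns, benchmark_returns):
    """Cumulative abnormal return: sum of per-period abnormal returns."""
    return sum(r - b for r, b in zip(stock_returns, benchmark_returns))

def bhar(stock_returns, benchmark_returns):
    """Buy-and-hold abnormal return: difference of compounded returns."""
    hold = 1.0
    for r in stock_returns:
        hold *= 1.0 + r
    bench = 1.0
    for b in benchmark_returns:
        bench *= 1.0 + b
    return (hold - 1.0) - (bench - 1.0)
```

    Because BHAR compounds, it can diverge from CAR over long horizons, which is why long-run IPO studies typically report both.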

  20. The General Circulation Model Response to a North Pacific SST Anomaly: Dependence on Time Scale and Pattern Polarity.

    NASA Astrophysics Data System (ADS)

    Kushnir, Yochanan; Lau, Ngar-Cheung

    1992-04-01

    A general circulation model was integrated with perpetual January conditions and prescribed sea surface temperature (SST) anomalies in the North Pacific. A characteristic pattern with a warm region centered northeast of Hawaii and a cold region along the western seaboard of North America was alternately added to and subtracted from the climatological SST field. Long 1350-day runs, as well as short 180-day runs, each starting from different initial conditions, were performed. The results were compared to a control integration with climatological SSTs. The model's quasi-stationary response does not exhibit a simple linear relationship with the polarity of the prescribed SST anomaly. In the short runs with a negative SST anomaly over the central ocean, a large negative height anomaly, with an equivalent barotropic vertical structure, occurs over the Gulf of Alaska. For the same SST forcing, the long run yields a different response pattern in which an anomalous high prevails over northern Canada and the Alaskan Peninsula. A significant reduction in the northward heat flux associated with baroclinic eddies and a concomitant reduction in convective heating occur along the model's Pacific storm track. In the runs with a positive SST anomaly over the central ocean, the average height response during the first 90-day period of the short runs is too weak to be significant. In the subsequent 90-day period and in the long run, an equivalent barotropic low occurs downstream from the warm SST anomaly. All positive-anomaly runs exhibit little change in baroclinic eddy activity or in the patterns of latent heat release. Horizontal momentum transports by baroclinic eddies appear to help sustain the quasi-stationary response in the height field regardless of the polarity of the SST anomaly. These results emphasize the important role played by baroclinic eddies in determining the quasi-stationary response to midlatitude SST anomalies. Differences between the response patterns of the short and long integrations may be relevant to future experimental design for studying air-sea interactions in the extratropics.

  1. Evaluation of Stratospheric Transport in New 3D Models Using the Global Modeling Initiative Grading Criteria

    NASA Technical Reports Server (NTRS)

    Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The Global Modeling Initiative (GMI) Team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, three different meteorological data sets were used to run an offline chemistry and transport model (CTM). Observationally based grading criteria were derived and applied to these simulations, various aspects of stratospheric transport were evaluated, and grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme. The FV Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations. We include an evaluation of high-latitude transport, which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations, and areas of improvement will be identified.

  2. "Hit-and-Run" transcription: de novo transcription initiated by a transient bZIP1 "hit" persists after the "run".

    PubMed

    Doidy, Joan; Li, Ying; Neymotin, Benjamin; Edwards, Molly B; Varala, Kranthi; Gresham, David; Coruzzi, Gloria M

    2016-02-03

    Dynamic transcriptional regulation is critical for an organism's response to environmental signals and yet remains elusive to capture. Such transcriptional regulation is mediated by master transcription factors (TFs) that control large gene regulatory networks. Recently, we described a dynamic mode of TF regulation named "hit-and-run". This model proposes that a master TF can interact transiently with a set of targets, and that transcription of these transient targets continues after the TF dissociates from the target promoter. However, experimental evidence validating active transcription of the transient TF targets has been lacking. Here, we show that active transcription continues after transient TF-target interactions by tracking de novo synthesis of RNAs made in response to TF nuclear import. To do this, we introduced an affinity-labeled 4-thiouracil (4tU) nucleobase to specifically isolate newly synthesized transcripts following conditional TF nuclear import. Thus, we extended the TARGET system (Transient Assay Reporting Genome-wide Effects of Transcription factors) to include 4tU labeling and named this new technology TARGET-tU. Our proof-of-principle example is the master TF Basic Leucine Zipper 1 (bZIP1), a central integrator of metabolic signaling in plants. Using TARGET-tU, we captured newly synthesized mRNAs made in response to bZIP1 nuclear import at a time when bZIP1 was no longer detectably bound to its target. Thus, the analysis of de novo transcriptomics demonstrates that bZIP1 may act as a catalyst TF to initiate a transcriptional complex ("hit"), after which active transcription by RNA polymerase continues without the TF being bound to the gene promoter ("run"). Our findings provide experimental proof of active transcription of transient TF targets, supporting a "hit-and-run" mode of action. This dynamic regulatory model allows a master TF to catalytically propagate rapid and broad transcriptional responses to changes in environment. Thus, the functional read-out of de novo transcripts produced by transient TF-target interactions allowed us to capture new models for genome-wide transcriptional control.

  3. A Unique Test Facility to Measure Liner Performance with a Summary of Initial Test Results

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Gaeta, R. J., Jr.

    1997-01-01

    A very ambitious study was initiated to obtain detailed acoustic and flow data, with and without a liner, in a duct containing a mean flow, so that available theoretical models of duct liners can be validated. A unique flow-duct facility equipped with a sound source, liner box, flush-walled microphones, traversable microphones, and traversable pressure and temperature probes was built. A unique set of instrumentation boxes equipped with computer-controlled traverses was designed and built that allowed measurements of Mach number, temperature, SPLs, and phases in two planes upstream of a liner section and two planes downstream, at a large number of measurement points. Each pair of planes provided acoustic pressure gradients for use in estimating the particle velocities. Specially built microphone probes were employed to make measurements in the presence of the flow. A microphone traverse was also designed to measure the distribution of SPLs and phases from the beginning of the liner to its end along the duct axis. All measurements were made with the help of cross-correlation techniques to reject flow noise and/or other obtrusive noise, if any. The facility was designed for future use at temperatures as high as 1500 F. In order to validate 2-D models in the presence of mean flow, the flow duct was equipped with a device to modify the boundary-layer flow on the smaller sides of a rectangular duct to simulate 2-D flow. A massive amount of data was acquired for use in validating duct liner models and will be provided to NASA in electronic form. It was found that the sound in the plane-wave regime is well behaved within the duct and the results are repeatable from one run to another. At the higher frequencies corresponding to the higher-order modes, the SPLs within the duct are not repeatable from run to run. In fact, when two or more modes have the same frequency (i.e., for degenerate modes), the SPLs in the duct varied by 2 dB to 12 dB from run to run. This made the calibration of the microphone probes extremely difficult at the higher frequencies.

  4. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae).

    PubMed

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Degraded areas pose challenging tasks for the sustainable management of natural resources; maintaining or even establishing certain successional stages seems particularly important. This paper presents a model of succession in five different types of degraded areas in Poland, based on changes in the carabid fauna. The Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure of the stage of succession. The course of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape-related aspects seem to be important with respect to these differences. Three characteristic phases were identified: a 'delay phase', an 'increase phase', and a 'stagnation phase'. In general, the course of succession could be described by four parameters: (1) 'initial degradation level', (2) 'delay', (3) 'increase rate', and (4) 'recovery level'. Applying the analytic solution of the logistic equation, characteristic values of the parameters were identified for each of the five area types. The model is of practical use because it provides a way to compare parameter values across different areas, to give hints for intervention, and to provide prognoses about future succession in the areas. Furthermore, the model can be transferred to other indicators of succession.
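    The four parameters map naturally onto a delayed logistic curve. A sketch of the analytic solution (parameter names follow the paper's description; the function itself is an illustrative reconstruction, not the authors' code):

```python
import math

def mib(t, initial_level, delay, increase_rate, recovery_level):
    """Mean Individual Biomass of carabids as a function of time:
    flat at initial_level during the delay phase, then a logistic
    rise at the given increase rate toward the recovery level."""
    if t <= delay:
        return initial_level
    return (recovery_level * initial_level) / (
        initial_level
        + (recovery_level - initial_level)
        * math.exp(-increase_rate * (t - delay))
    )
```

    The curve is continuous at t = delay and saturates at the recovery level, matching the delay, increase, and stagnation phases described above.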

  5. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational effort. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum, as they depend critically on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, the classical implementation of PSO can get trapped in local minima at later iterations as the particles' inertia wanes. We propose a Competitive PSO (CPSO) that helps particles escape from local minima, with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
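    For reference, a bare-bones classical PSO loop, the baseline that CPSO modifies; the parameter values here are generic defaults, not those of the paper, and the misfit function stands in for the traveltime residual:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box. Each particle tracks its own best position
    (pbest); the swarm shares a global best (gbest); velocities blend
    inertia with attraction toward both bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    The "trapping" problem the abstract describes arises because the inertia and attraction terms shrink as the swarm collapses onto gbest; CPSO counteracts this by injecting diversity so particles can still escape late in the run.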

  6. Biomechanical Differences of Foot-Strike Patterns During Running: A Systematic Review With Meta-analysis.

    PubMed

    Almeida, Matheus O; Davis, Irene S; Lopes, Alexandre D

    2015-10-01

    Systematic review with meta-analysis. To determine the biomechanical differences between foot-strike patterns used when running. Strike patterns during running have received attention in the recent literature due to their potential mechanical differences and associated injury risks. Electronic databases (MEDLINE, Embase, LILACS, SciELO, and SPORTDiscus) were searched through July 2014. Studies (cross-sectional, case-control, prospective, and retrospective) comparing the biomechanical characteristics of foot-strike patterns during running in distance runners at least 18 years of age were included in this review. Two independent reviewers evaluated the risk of bias. A meta-analysis with a random-effects model was used to combine the data from the included studies. Sixteen studies were included in the final analysis. In the meta-analyses of kinematic variables, significant differences between forefoot and rearfoot strikers were found for foot and knee angle at initial contact and for knee flexion range of motion. A forefoot-strike pattern resulted in a plantar-flexed ankle position and a more flexed knee position at initial contact with the ground, compared to a dorsiflexed ankle position and a more extended knee position for the rearfoot strikers. In the comparison of rearfoot and midfoot strikers, midfoot strikers demonstrated greater ankle dorsiflexion range of motion and decreased knee flexion range of motion compared to rearfoot strikers. For kinetic variables, the meta-analysis revealed that rearfoot strikers had higher vertical loading rates than forefoot strikers. There are differences in kinematic and kinetic characteristics between foot-strike patterns when running. Clinicians should be aware of these characteristics to help manage running injuries and to advise on training.
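    Random-effects pooling of per-study effects, as used in such meta-analyses, is commonly done with the DerSimonian-Laird estimator. A compact sketch (illustrative, not the review's actual code):

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    Returns the pooled effect, its variance, and the estimated
    between-study variance tau^2."""
    w = [1.0 / v for v in variances]              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star), tau2
```

    When the studies agree (Q below its degrees of freedom), tau^2 is truncated to zero and the result reduces to the fixed-effect pool; heterogeneity inflates tau^2 and widens the pooled confidence interval.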

  7. Is There an Association Between a History of Running and Symptomatic Knee Osteoarthritis? A Cross-Sectional Study From the Osteoarthritis Initiative.

    PubMed

    Lo, Grace H; Driban, Jeffrey B; Kriska, Andrea M; McAlindon, Timothy E; Souza, Richard B; Petersen, Nancy J; Storti, Kristi L; Eaton, Charles B; Hochberg, Marc C; Jackson, Rebecca D; Kent Kwoh, C; Nevitt, Michael C; Suarez-Almazor, Maria E

    2017-02-01

    Regular physical activity, including running, is recommended based on known cardiovascular and mortality benefits. However, controversy exists regarding whether running can be harmful to knees. The purpose of this study is to evaluate the relationship of running with knee pain, radiographic osteoarthritis (OA), and symptomatic OA. This was a retrospective cross-sectional study of Osteoarthritis Initiative participants (2004-2014) with knee radiograph readings, symptom assessments, and completed lifetime physical activity surveys. Using logistic regression, we evaluated the association of a history of leisure running with the outcomes of frequent knee pain, radiographic OA, and symptomatic OA. Symptomatic OA required at least 1 knee with both radiographic OA and pain. Of 2,637 participants, 55.8% were female, the mean ± SD age was 64.3 ± 8.9 years, and the mean ± SD body mass index was 28.5 ± 4.9 kg/m²; 29.5% of these participants ran at some time in their lives. Unadjusted odds ratios of pain, radiographic OA, and symptomatic OA for prior runners and current runners compared with those who never ran were 0.83 and 0.71 (P for trend = 0.002), 0.83 and 0.78 (P for trend = 0.01), and 0.81 and 0.64 (P for trend = 0.0006), respectively. Adjusted models were similar, except that the radiographic OA results were attenuated. There is no increased risk of symptomatic knee OA among self-selected runners compared with nonrunners in a cohort recruited from the community. In those without OA, running does not appear to be detrimental to the knees. © 2016, American College of Rheumatology.

  8. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft-dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.

  9. Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop

    DTIC Science & Technology

    2008-08-01

    obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real-time operating system. However...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink

  10. Modeling ecohydrologic processes at Hubbard Brook: Initial results for Watershed 6 stream discharge and chemistry

    EPA Science Inventory

    The Hubbard Brook Long Term Ecological Research site has produced some of the most extensive and long-running databases on the hydrology, biology and chemistry of forest ecosystem responses to climate and forest harvest. We used these long-term databases to calibrate and apply G...

  11. Experimental test of an online ion-optics optimizer

    NASA Astrophysics Data System (ADS)

    Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.

    2018-07-01

    A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion-optical beamline - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes satisfying a given set of optical properties for an ion-optical system. The optimization approach is based on the particle swarm method and is entirely model independent; thus the success of the optimization does not depend on the accuracy of an extant ion-optical model of the system to be optimized. Initial test runs of a first-order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam-optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well-defined set of initial multipole settings.

  12. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users began full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks, and proper job mix. The SCF workload for Dawn will be inherently different from that of Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot reasonably be attempted on any other resources due to problem size, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn possible, without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  13. Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing

    DTIC Science & Technology

    1994-07-01

    implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing

  14. A coupled atmosphere-ocean-wave modeling approach for a Tropical Like Cyclone in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Ricchi, Antonio; Miglietta, M. Marcello; Barbariol, Francesco; Benetazzo, Alvise; Bonaldo, Davide; Falcieri, Francesco; Russo, Aniello; Sclavo, Mauro; Carniel, Sandro

    2016-04-01

    On 6-8 November 2011, near the Balearic Islands, an extra-tropical depression developed into a Tropical-Like Cyclone (TLC) characterized by a deep warm core, leading to a mean sea level pressure minimum of about 991 hPa, 10-m wind speeds higher than 28 m/s around the eye, and very intense rainfall, especially in the Gulf of Lion. To explore in detail the effect of the sea surface temperature on the Medicane's evolution, we employed the coupled modeling system COAWST, which consists of the ROMS model for the hydrodynamic part, the WRF model for the meteorological part, and SWAN for surface wave modeling. All models run over a 5-km domain (the same domain for ROMS and SWAN). COAWST was used with different configurations: in Stand-Alone (SA) mode (that is, with only the atmospheric part), in atmosphere-ocean coupled mode (AO), and in a fully coupled version that also includes surface waves (AOW). Several sensitivity simulations performed with the SA approach were undertaken to simulate the TLC evolution. Especially in the later stage of the lifetime, when the cyclone was weaker, the predictability appears limited. Sensitivity simulations considered the effect of the cumulus scheme (with an explicit scheme the Medicane does not develop and remains an extra-tropical depression) and the PBL scheme (with MYJ or MYNN the resulting Medicanes are extremely similar, although the roughness differs considerably between the two experiments). Comparing the three runs, the effects of the different configurations on the Medicane tracks are significant only in the later stage of the cyclone lifetime. Over the modeled basin as a whole, wind intensity is higher in the SA case than in both coupled runs. Compared to case AO, winds are about 1 m/s stronger, even though the spatial distribution is very similar (possibly because of the lower SST produced by case AO). Case AOW produces less intense winds than the SA and AO cases in the areas where the wave field is most developed (differences of about 2-4 m/s), while winds are more intense in the neighborhood of the eye of the cyclone. Moreover, the inclusion of the wave model (AOW) has implications for the water column, changing the depth of the ocean mixed layer along the track of the Medicane, so that eventually the SST in the AOW run is colder than in AO. The date chosen for the run initialization appears important: an earlier initial condition allows the evolution of the cyclone to be properly simulated from cyclogenesis and includes the effect of the air-sea interaction through the coupled models.

  15. Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro

    2008-03-01

    We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6-billion-cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation, where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.
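    For Gaussian random-phase initial conditions the genus curve has the well-known analytic shape g(nu) proportional to (1 - nu^2) exp(-nu^2/2); deviations from this shape, such as the "meatball" shift, are what the test measures. A sketch (the amplitude depends on the power spectrum and is left generic):

```python
import math

def genus_gaussian(nu, amplitude=1.0):
    """Genus per unit volume of a Gaussian random field at density
    threshold nu, expressed in units of the field's standard deviation."""
    return amplitude * (1.0 - nu * nu) * math.exp(-nu * nu / 2.0)
```

    Positive genus near nu = 0 reflects a sponge-like topology at the median density, while negative genus at |nu| > 1 reflects isolated clusters (high nu) or voids (low nu); a "meatball" shift moves the curve toward the cluster side.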

  16. Incorporating a Full-Physics Meteorological Model into an Applied Atmospheric Dispersion Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Allwine, K Jerry; Rutz, Frederick C.

    2004-08-23

A new modeling system has been developed to provide a non-meteorologist with tools to predict air pollution transport in regions of complex terrain. This system couples the Penn State/NCAR Mesoscale Model 5 (MM5) with Earth Tech’s CALMET-CALPUFF system using a unique Graphical User Interface (GUI) developed at Pacific Northwest National Laboratory. This system is most useful in data-sparse regions, where there are limited observations to initialize the CALMET model. The user is able to define the domain of interest, provide details about the source term, and enter a surface weather observation through the GUI. The system then generates initial conditions and time-constant boundary conditions for use by MM5. MM5 is run and the results are piped to CALPUFF for the dispersion calculations. Contour plots of pollutant concentration are prepared for the user. The primary advantages of the system are the streamlined application of MM5 and CALMET, limited data requirements, and the ability to run the coupled system on a desktop or laptop computer. In comparison with data collected as part of a field campaign, the new modeling system shows promise that a full-physics mesoscale model can be used in an applied modeling system to effectively simulate locally thermally-driven winds with minimal observations as input. An unexpected outcome of this research was how well CALMET represented the locally thermally-driven flows.

  17. The Met Office Coupled Atmosphere/Land/Ocean/Sea-Ice Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Mirouze, Isabelle; King, Robert; Martin, Matthew; Hines, Adrian

    2015-04-01

The Met Office has developed a weakly-coupled data assimilation (DA) system using the global coupled model HadGEM3 (Hadley Centre Global Environment Model, version 3). At present the analyses from separate ocean and atmosphere DA systems are combined to produce coupled forecasts. The aim of coupled DA is to produce a more consistent analysis for coupled forecasts, which may lead to less initialisation shock and improved forecast performance. The HadGEM3 coupled model combines the atmospheric model UM (Unified Model) at 60 km horizontal resolution on 85 vertical levels, the ocean model NEMO (Nucleus for European Modelling of the Ocean) at 25 km (at the equator) horizontal resolution on 75 vertical levels, and the sea-ice model CICE at the same resolution as NEMO. The atmosphere and the ocean/sea-ice fields are coupled every hour using the OASIS coupler. The coupled model is corrected using two separate 6-hour window data assimilation systems: a 4D-Var for the atmosphere with associated soil moisture content nudging and snow analysis schemes on the one hand, and a 3D-Var FGAT for the ocean and sea-ice on the other hand. The background information in the DA systems comes from a previous 6-hour forecast of the coupled model. To isolate the impact of the coupled DA, 13-month experiments have been carried out, including 1) a full atmosphere/land/ocean/sea-ice coupled DA run, 2) an atmosphere-only run forced by OSTIA SSTs and sea-ice with atmosphere and land DA, and 3) an ocean-only run forced by atmospheric fields from run 2 with ocean and sea-ice DA. In addition, 5-day and 10-day forecast runs have been produced from initial conditions generated by either run 1 or a combination of runs 2 and 3. The different results have been compared to each other and, whenever possible, to other references such as the Met Office atmosphere and ocean operational analyses or the OSTIA SST data.
The performance of the coupled DA is similar to that of the existing separate ocean and atmosphere DA systems, despite the fact that the assimilation error covariances have not yet been tuned for coupled DA. In addition, the coupled model exhibits some biases which do not affect the uncoupled models; an example is precipitation and run-off errors affecting the ocean salinity, which in turn impacts the performance of the ocean data assimilation. This does, however, highlight a particular benefit of data assimilation: it can help to identify short-term model biases by using, for example, the differences between the observations and the model background (innovations) and the mean increments. Coupled DA has the distinct advantage that this gives direct information about the coupled model's short-term biases. Identifying these biases and developing solutions will improve the short-range coupled forecasts, and may also improve the coupled model on climate timescales.

  18. Comparison of tobacco control scenarios: quantifying estimates of long-term health impact using the DYNAMO-HIA modeling tool.

    PubMed

    Kulik, Margarete C; Nusselder, Wilma J; Boshuizen, Hendriek C; Lhachimi, Stefan K; Fernández, Esteve; Baili, Paolo; Bennett, Kathleen; Mackenbach, Johan P; Smit, H A

    2012-01-01

There are several types of tobacco control interventions/policies which can change future smoking exposure. The most basic intervention types are 1) smoking cessation interventions, 2) preventing smoking initiation, and 3) implementation of a nationwide policy affecting quitters and starters simultaneously. The possibility for dynamic quantification of such different interventions is key for comparing the timing and size of their effects. We developed a software tool, DYNAMO-HIA, which allows for a quantitative comparison of the health impact of different policy scenarios. We illustrate the outcomes of the tool for the three typical types of tobacco control interventions if these were applied in the Netherlands. The tool was used to model the effects of different types of smoking interventions on future smoking prevalence and on health outcomes, comparing these three scenarios with the business-as-usual scenario. The necessary data input was obtained from the DYNAMO-HIA database which was assembled as part of this project. All smoking interventions will be effective in the long run. The population-wide strategy will be most effective in both the short and long term. The smoking cessation scenario will be second-most effective in the short run, though in the long run the smoking initiation scenario will become almost as effective. Interventions aimed at preventing the initiation of smoking need a long time horizon to become manifest in terms of health effects. The outcomes strongly depend on the groups targeted by the intervention. We calculated how much more effective the population-wide strategy is, in both the short and long term, compared to quit smoking interventions and measures aimed at preventing the initiation of smoking. By allowing a great variety of user-specified choices, the DYNAMO-HIA tool is a powerful instrument by which the consequences of different tobacco control policies and interventions can be assessed.

  19. Comparison of Tobacco Control Scenarios: Quantifying Estimates of Long-Term Health Impact Using the DYNAMO-HIA Modeling Tool

    PubMed Central

    Kulik, Margarete C.; Nusselder, Wilma J.; Boshuizen, Hendriek C.; Lhachimi, Stefan K.; Fernández, Esteve; Baili, Paolo; Bennett, Kathleen; Mackenbach, Johan P.; Smit, H. A.

    2012-01-01

Background There are several types of tobacco control interventions/policies which can change future smoking exposure. The most basic intervention types are 1) smoking cessation interventions, 2) preventing smoking initiation, and 3) implementation of a nationwide policy affecting quitters and starters simultaneously. The possibility for dynamic quantification of such different interventions is key for comparing the timing and size of their effects. Methods and Results We developed a software tool, DYNAMO-HIA, which allows for a quantitative comparison of the health impact of different policy scenarios. We illustrate the outcomes of the tool for the three typical types of tobacco control interventions if these were applied in the Netherlands. The tool was used to model the effects of different types of smoking interventions on future smoking prevalence and on health outcomes, comparing these three scenarios with the business-as-usual scenario. The necessary data input was obtained from the DYNAMO-HIA database which was assembled as part of this project. All smoking interventions will be effective in the long run. The population-wide strategy will be most effective in both the short and long term. The smoking cessation scenario will be second-most effective in the short run, though in the long run the smoking initiation scenario will become almost as effective. Interventions aimed at preventing the initiation of smoking need a long time horizon to become manifest in terms of health effects. The outcomes strongly depend on the groups targeted by the intervention. Conclusion We calculated how much more effective the population-wide strategy is, in both the short and long term, compared to quit smoking interventions and measures aimed at preventing the initiation of smoking.
By allowing a great variety of user-specified choices, the DYNAMO-HIA tool is a powerful instrument by which the consequences of different tobacco control policies and interventions can be assessed. PMID:22384230

  20. High-resolution dynamical downscaling of the future Alpine climate

    NASA Astrophysics Data System (ADS)

    Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph

    2017-04-01

The Alpine region, and Switzerland in particular, is a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of very complex topography and the still rather coarse horizontal resolution of current GCMs, in which not all of the multi-scale processes that drive the local weather and climate can be resolved. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 under the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run covers a 20-year time slice, 2080-2099. We compare the control WRF-CESM simulations to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to our future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our particular interest is in the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
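The quantile-mapping step mentioned above can be sketched in its simplest empirical form. The study's actual method and variables are not specified here; the arrays below are invented toy data, with the "observations" constructed as the control run minus a constant bias:

```python
import numpy as np

def quantile_map(model_ctrl, obs, model_future):
    """Empirical quantile mapping: for each future model value, find its
    quantile within the control-run distribution, then return the observed
    value at that same quantile."""
    ctrl_sorted = np.sort(model_ctrl)
    q = np.searchsorted(ctrl_sorted, model_future) / len(ctrl_sorted)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

# Toy check: a model that is uniformly 1 unit too high
ctrl = np.arange(1.0, 101.0)              # hypothetical control-run values
obs = ctrl - 1.0                          # hypothetical observations
corrected = quantile_map(ctrl, obs, np.array([50.0]))
```

Because the toy bias is constant, the mapped value lands close to the debiased value (about 49 here); real applications map the full future distribution, often per season and per grid cell.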

  1. A Summary of the NASA Lightning Nitrogen Oxides Model (LNOM) and Recent Results

    NASA Technical Reports Server (NTRS)

Koshak, William; Peterson, Harold

    2011-01-01

The NASA Marshall Space Flight Center introduced the Lightning Nitrogen Oxides Model (LNOM) a couple of years ago to combine routine state-of-the-art measurements of lightning with empirical laboratory results of lightning NOx production. The routine measurements include VHF lightning source data [such as from the North Alabama Lightning Mapping Array (LMA)], and ground flash location, peak current, and stroke multiplicity data from the National Lightning Detection Network™ (NLDN). Following these initial runs of LNOM, the model was updated to include several non-return-stroke lightning NOx production mechanisms, and its impact on an August 2006 run of CMAQ was assessed. In this study, we review the evolution of the LNOM in greater detail and discuss the model's latest upgrades and applications. Whereas previous applications were limited to five summer months of data for North Alabama thunderstorms, the most recent LNOM analyses cover several years. The latest statistics of ground and cloud flash NOx production are provided.

  2. Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.

    PubMed

    Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas

    2004-08-01

The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object-oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run, or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.

  3. Predictability of a Coupled Model of ENSO Using Singular Vector Analysis: Optimal Growth and Forecast Skill.

    NASA Astrophysics Data System (ADS)

    Xue, Yan

The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model, local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth under a linear evolution operator A is governed by the eigenvalues of A^{T}A; the square roots of these eigenvalues are the singular values, and the corresponding eigenvectors are the singular vectors. One dominant growing singular vector is found, and the optimal 6-month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecast sets is seasonal, attributable to the seasonal variations in the mean background, except that in cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". The dominant growing singular vector is characterized by north-south and east-west dipoles, convergent winds on the equator in the eastern Pacific, and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of the initial state. The ENSO system is inherently unpredictable, since the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO.
However, the inherent ENSO predictability is only a secondary factor; the mismatches between the model and data are the primary factor controlling the current forecast skill.
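The optimal-growth computation the abstract describes (singular values of the propagator A as square roots of the eigenvalues of A^T A) can be illustrated with a toy operator. The 2x2 matrix below is purely hypothetical, standing in for the 18-dimensional EOF-space operator:

```python
import numpy as np

# Hypothetical 2x2 linear propagator A (a stand-in for the EOF-space operator)
A = np.array([[1.2, 0.9],
              [0.0, 0.4]])

# The singular values of A are the square roots of the eigenvalues of A^T A;
# numpy returns them in decreasing order
U, s, Vt = np.linalg.svd(A)

optimal_growth = s[0]   # largest possible amplification over the interval
optimal_ic = Vt[0]      # unit-norm initial perturbation achieving that growth

# Check: propagating the optimal IC amplifies its norm by exactly s[0]
amplification = np.linalg.norm(A @ optimal_ic)
```

Any perturbation other than `optimal_ic` grows by less than `s[0]`, which is why the leading singular vector defines the worst-case (or best-case, for predictability studies) initial error.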

  4. Impact of landuse/land cover change on run-off in the catchment of a hydro power project

    NASA Astrophysics Data System (ADS)

    Khare, Deepak; Patra, Diptendu; Mondal, Arun; Kundu, Sananda

    2017-05-01

The landuse/land cover change and rainfall have a significant influence on the hydrological response of river basins. The run-off characteristics change as the initial abstraction is reduced, which increases the run-off volume. Therefore, it is necessary to quantify the changes in the run-off characteristics of a catchment under the influence of changed landuse/land cover. The Soil Conservation Service (SCS) model has been used in the present study to analyse the impact of landuse/land cover change (past, present and future time periods) on the run-off characteristics of a part of the Narmada basin at the gauge discharge site of Mandaleswar in Madhya Pradesh, India. Calculated run-off has been compared with the observed run-off data for the study. The landuse/land cover maps of 1990, 2000 and 2009 have been prepared by digital classification of satellite imageries with proper accuracy assessment. The impact of the run-off change on hydro power potential has been assessed, along with an estimation of the future changes in hydro power potential. Five rainfall conditions (+10 %, +5 %, average, -5 % and -10 % of average rainfall) have been applied at 90 % and 75 % dependability levels. The generated energy will be less at 90 % dependable flow than at 75 % dependable flow. This work will be helpful for future planning related to the establishment of hydropower facilities.
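The SCS curve-number relation underlying such run-off estimates can be sketched as follows. The curve numbers and rainfall depth below are illustrative only, not the study's values:

```python
def scs_runoff(p_mm, cn):
    """Direct run-off depth (mm) from the SCS curve-number relation,
    using the conventional initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                    # rainfall fully absorbed, no run-off
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# A higher curve number (e.g. after land cover change toward built-up area)
# means less initial abstraction and more run-off from the same 80 mm storm
q_low_cn = scs_runoff(80.0, 70)   # hypothetical pre-change curve number
q_high_cn = scs_runoff(80.0, 85)  # hypothetical post-change curve number
```

This is exactly the mechanism the abstract cites: reduced initial abstraction under changed land cover increases run-off volume.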

  5. Learnable Models for Information Diffusion and its Associated User Behavior in Micro-blogosphere

    DTIC Science & Technology

    2012-08-30

According to the work of Even-Dar and Shapira (2007), we recall the definition of the basic voter model on network G. In the model, each node of G...reason as follows. We started with the K distinct initial nodes and all the other nodes were neutral in the beginning. Recall that we set the average time... memory, running under Linux. Learning to predict opinion share and detect anti-majority opinionists in social networks 29 7 Conclusion Unlike the popular

  6. Considerations for initiating and progressing running programs in obese individuals.

    PubMed

    Vincent, Heather K; Vincent, Kevin R

    2013-06-01

Running has rapidly increased in popularity and elicits numerous health benefits, including weight loss. At present, no practical guidelines are available for obese persons who wish to start a running program. This article is a narrative review of the emerging evidence of the musculoskeletal factors to consider in obese patients who wish to initiate a running program and increase its intensity. Main program goals should include gradual weight loss, avoidance of injury, and enjoyment of the exercise. Pre-emptive strengthening exercises can improve the strength of the foot and ankle, hip abductors, quadriceps, and trunk to help support the joints bearing the loads before starting a running program. Depending on the presence of comorbid joint pain, nonimpact exercise or walking (on a flat surface, on an incline, and at high intensity) can be used to initiate the program. For progression to running, intensity or mileage increases should be slow and consistent to prevent musculoskeletal injury. A stepwise transition to running at a rate not exceeding 5%-10% of weekly mileage or duration is reasonable for this population. Intermittent walk-jog programs are also attractive for persons who are not able to sustain running for a long period. Musculoskeletal pain should neither carry over to the next day nor be increased the day after exercising. Rest days in between running sessions may help prevent overuse injury. Patients who have undergone bariatric surgery and are now lean can also run, but special considerations such as hydration and energy replacement must be addressed. In summary, obese persons can run for exercise, provided they follow conservative transitions and progression, schedule rest days, and heed the onset of pain symptoms. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
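The 5%-10% weekly progression rule cited above can be turned into a simple schedule generator. The starting and target volumes below are hypothetical, purely for illustration:

```python
def weekly_mileage_plan(start_km, target_km, max_increase=0.10):
    """Week-by-week mileage plan that never grows weekly volume by more
    than max_increase (the 5%-10% rule from the review)."""
    plan = [start_km]
    while plan[-1] < target_km:
        plan.append(min(plan[-1] * (1.0 + max_increase), target_km))
    return plan

# Hypothetical beginner: from 5 km/week to 8 km/week at +10% per week
plan = weekly_mileage_plan(5.0, 8.0)
```

The compounding cap means reaching even a modest target takes several weeks, which is the point: slow, consistent increases reduce overuse-injury risk.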

  7. Impacts of dynamical ocean coupling in MJO experiments using NICAM/NICOCO

    NASA Astrophysics Data System (ADS)

    Miyakawa, T.

    2016-12-01

The cloud-system resolving atmospheric model NICAM has been successful in producing Madden-Julian Oscillations (MJOs), with its prediction skill estimated to be about 4 weeks in a series of hindcast experiments for winter MJO events during 2003-2012 (Miyakawa et al. 2014). A simple mixed-layer ocean model has been applied with nudging towards a prescribed "persistent anomaly SST", which maintains the initial anomaly on top of a time-varying climatological seasonal cycle. This setup enables the model to interact with an ocean with reasonably realistic SST, and to run in a "forecast mode" without using any observational information after the initial date. A limitation is that under this setup, the model skill drops if the oceanic anomaly changes rapidly after the initial date in the real world. Here we run the recently developed, fully 3D-ocean coupled version NICAM-COCO (NICOCO) and explore its impact on MJO simulations. Dynamical ocean models can produce oceanic waves/currents, but will also have biases and drift away from reality. In a sub-seasonal simulation (an initial-value problem), it is essential to weigh the merit of better-represented oceanic signals against the demerit of bias/drift. A test case simulation series featuring an MJO that triggered the abrupt termination of a major El Nino in 1998 shows that the abrupt termination occurs in all 9 simulation members, highlighting the merit of ocean coupling. However, this is a case where oceanic signals are at their extremes. We carried out an estimation of MJO prediction skill for a preliminary 1-degree mesh ocean version of NICOCO in a similar manner to Miyakawa et al. (2014). The MJO skill was degraded for simulations initialized at RMM phases 1 and 2 (corresponding to the Indian Ocean), while those initialized at phase 8 (Africa) were not strongly affected.
The tendency of the model ocean to overestimate the Maritime Continent warm pool SST possibly delays the eastward propagation of the MJO convective envelope, accounting for the degraded prediction skill (phases 1 and 2). Reference: Miyakawa, T., M. Satoh, H. Miura, H. Tomita, H. Yashiro, A. T. Noda, Y. Yamada, C. Kodama, M. Kimoto & K. Yoneyama: Madden-Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer. Nature Comm. 5, 3769, doi:10.1038/ncomms4769.

  8. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    ERIC Educational Resources Information Center

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  9. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    NASA Astrophysics Data System (ADS)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical in terms of the severity of consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequences of the road event backwards. The authors also aim to validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.

  10. Ditching Tests of a 1/24-Scale Model of the Lockheed XR60-1 Airplane, TED No. NACA 235

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J.; Cederborg, Gibson A.

    1948-01-01

The ditching characteristics of the Lockheed XR60-1 airplane were determined by tests of a 1/24-scale dynamic model in calm water at the Langley tank no. 2 monorail. Various landing attitudes, flap settings, speeds, and conditions of damage were investigated. The ditching behavior was evaluated from recordings of decelerations, length of runs, and motions of the model. Scale-strength bottoms and simulated crumpled bottoms were used to reproduce probable damage to the fuselage. It was concluded that the airplane should be ditched at a landing attitude of about 5 deg with flaps full down. At this attitude, the maximum longitudinal deceleration should not exceed 2g and the landing run will be about three fuselage lengths. Damage to the fuselage will not be excessive and will be greatest near the point of initial contact with the water.

  11. Improved Modeling of Land-Atmosphere Interactions using a Coupled Version of WRF with the Land Information System

    NASA Technical Reports Server (NTRS)

Case, Jonathan L.; LaCasse, Katherine M.; Santanello, Joseph A., Jr.; Lapenta, William M.; Peters-Lidard, Christa D.

    2007-01-01

    The exchange of energy and moisture between the Earth's surface and the atmospheric boundary layer plays a critical role in many hydrometeorological processes. Accurate and high-resolution representations of surface properties such as sea-surface temperature (SST), vegetation, soil temperature and moisture content, and ground fluxes are necessary to better understand the Earth-atmosphere interactions and improve numerical predictions of weather and climate phenomena. The NASA/NWS Short-term Prediction Research and Transition (SPORT) Center is currently investigating the potential benefits of assimilating high-resolution datasets derived from the NASA moderate resolution imaging spectroradiometer (MODIS) instruments using the Weather Research and Forecasting (WRF) model and the Goddard Space Flight Center Land Information System (LIS). The LIS is a software framework that integrates satellite and ground-based observational and modeled data along with multiple land surface models (LSMs) and advanced computing tools to accurately characterize land surface states and fluxes. The LIS can be run uncoupled to provide a high-resolution land surface initial condition, and can also be run in a coupled mode with WRF to integrate surface and soil quantities using any of the LSMs available in LIS. The LIS also includes the ability to optimize the initialization of surface and soil variables by tuning the spin-up time period and atmospheric forcing parameters, which cannot be done in the standard WRF. Among the datasets available from MODIS, a leaf-area index field and composite SST analysis are used to improve the lower boundary and initial conditions to the LIS/WRF coupled model over both land and water. Experiments will be conducted to measure the potential benefits from using the coupled LIS/WRF model over the Florida peninsula during May 2004. 
This month experienced relatively benign weather conditions, which will allow the experiments to focus on the local and mesoscale impacts of the high-resolution MODIS datasets and optimized soil and surface initial conditions. Follow-on experiments will examine the utility of such an optimized WRF configuration for more complex weather scenarios such as convective initiation. This paper will provide an overview of the experiment design and present preliminary results from selected cases in May 2004.

  12. Targeting the right input data to improve crop modeling at global level

    NASA Astrophysics Data System (ADS)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

Designed for location-specific simulations, the use of crop models at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at a broad scale, we use a modeling platform that assumes a uniformly generated grid cell as the unit area. Specific weather, specific soil and specific management practices for each crop are represented for each grid cell. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivity of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative for simulating actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage.
We conducted a sensitivity analysis on contrasting management practices, initial soil conditions, and soil characteristics. Management practices were represented by planting date and the amount of fertilizer; initial conditions by estimates of initial nitrogen, soil water, and stable soil carbon; and soil information by a simplified version of the WISE database, characterized by soil organic matter, texture and soil depth. We considered these factors the most important determinants of nutrient supply to crops during the growing season. Our first global results demonstrate that the model is most sensitive to the initial conditions in terms of soil carbon and nitrogen (CN): wheat yields decreased by 45% when soil CN is set to zero and increased by 15% when twice the soil CN content of the reference run is used. Yields did not appear to be very sensitive to initial soil water conditions, varying from 0% yield increase when initial soil water is set to wilting point to 6% yield increase when it is set to field capacity. They are slightly sensitive to nitrogen application: from an 8% yield decrease when no N is applied to a 9% yield increase when 150 kg.ha-1 is applied. However, closer examination of the results shows that the model is more sensitive to nitrogen application than to initial soil CN content in Vietnam, Thailand and Japan compared to the rest of the world. More analyses per region and results on planting dates and soil properties will be presented.

  13. Intercomparison of model response and internal variability across climate model ensembles

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Ganguly, Auroop R.

    2017-10-01

    Characterization of climate uncertainty at regional scales over near-term planning horizons (0-30 years) is crucial for climate adaptation. Climate internal variability (CIV) dominates climate uncertainty over decadal prediction horizons at stakeholders' scales (regional to local). In the literature, CIV has been characterized indirectly using projections of climate change from multi-model ensembles (MME) rather than directly from multiple initial condition ensembles (MICE), primarily because an adequate number of initial condition (IC) runs was not available for any climate model. The recent availability of a significant number of IC runs from one climate model, however, makes it possible for the first time to characterize CIV directly from climate model projections and to perform a sensitivity analysis of the dominance of CIV relative to model response variability (MRV). Here, we measure relative agreement (a dimensionless number with values ranging between 0 and 1, inclusive; a high value indicates less variability, and vice versa) among MME and MICE and find that CIV is lower than MRV for all projection time horizons and spatial resolutions for precipitation and temperature. However, CIV exhibits greater dominance over MRV for seasonal and annual mean precipitation at higher latitudes, where signals of climate change are expected to emerge sooner. Furthermore, precipitation exhibits large uncertainties and a rapid decline in relative agreement from global to continental, regional, or local scales for MICE compared to MME. The fractional contribution of uncertainty due to CIV is invariant for precipitation and decreases for temperature as lead time progresses toward the end of the century.
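The abstract does not define the relative-agreement metric, so the sketch below substitutes an illustrative signal-to-spread score with the same qualitative behavior (bounded in [0, 1], high when ensemble members agree); it is a stand-in for the study's measure, and the ensemble values are invented.

```python
import statistics

def relative_agreement(changes):
    """Illustrative agreement score, NOT the study's metric: 1.0 when all
    ensemble members project identical changes, approaching 0 when the
    spread dwarfs the mean signal."""
    mean = statistics.fmean(changes)
    spread = statistics.pstdev(changes)
    return 1.0 if spread == 0.0 else abs(mean) / (abs(mean) + spread)

# Hypothetical projected precipitation changes (%) for one region:
mme  = [1.2, 0.4, 2.1, -0.3, 1.8]   # multi-model ensemble (samples MRV)
mice = [1.1, 1.3, 1.0, 1.2, 1.4]    # initial-condition ensemble (samples CIV)

# Lower CIV than MRV shows up as higher agreement among MICE members:
print(relative_agreement(mice) > relative_agreement(mme))  # True
```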

  14. Data Assimilation Cycling for Weather Analysis

    NASA Technical Reports Server (NTRS)

    Tran, Nam; Li, Yongzuo; Fitzpatrick, Patrick

    2008-01-01

    This software package runs the atmospheric model MM5 in data assimilation cycling mode to produce an optimized weather analysis, including the ability to insert or adjust a hurricane vortex. The program runs MM5 through a cycle of short forecasts every three hours, in which the vortex is adjusted to match the observed hurricane location and storm intensity. This technique adjusts the surrounding environment so that the proper steering current and environmental shear are achieved. MM5cycle uses a Cressman analysis to blend observations into model fields to obtain a more accurate weather analysis. Quality control of observations is also performed in every cycle to remove bad data that may contaminate the analysis. This technique can assimilate and propagate data in time from intermittent and infrequent observations while maintaining the atmospheric field in a dynamically balanced state. The software consists of a C-shell script (MM5cycle.driver) and three FORTRAN programs (splitMM5files.F, comRegrid.F, and insert_vortex.F), all contained in the pre-processor component of MM5 called "Regridder." The model is first initialized with data from a global model such as the Global Forecast System (GFS), which also provides lateral boundary conditions. These data are separated into single-time files using splitMM5files.F. The hurricane vortex is then bogussed in at the correct location and with the correct wind field using insert_vortex.F. The modified initial and boundary conditions are then recombined into the model fields using comRegrid.F. The model then makes a three-hour forecast. The three-hour forecast data from MM5 become the analysis for the next short forecast run, in which the vortex is again adjusted. The process repeats until the desired analysis time is reached. This code can also assimilate observations if desired.
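The blend-then-forecast cycling loop can be sketched in a few lines. The functions below are toy stand-ins (the vortex-insertion step is omitted and the "model" is a scalar with a fixed drift), so this shows only the control flow of the cycle, not MM5 physics:

```python
def cressman_blend(background, obs, weight=0.5):
    """Toy stand-in for the Cressman analysis: nudge the background
    state toward the observed value with a fixed weight."""
    return background + weight * (obs - background)

def short_forecast(state, hours):
    """Toy 'model' whose state drifts between assimilation cycles."""
    return state + 0.2 * hours

def run_cycling(state, obs_at, n_cycles, step_hours=3):
    """Alternate analysis and short forecast, as in the driver script."""
    for k in range(n_cycles):
        state = cressman_blend(state, obs_at(k * step_hours))
        state = short_forecast(state, step_hours)
    return state

# With observations holding steady at 10.0, cycling anchors the drifting
# model near the observed value instead of letting the drift accumulate.
final = run_cycling(state=0.0, obs_at=lambda t: 10.0, n_cycles=20)
print(round(final, 2))  # → 11.2 (fixed point of blend-then-drift)
```

The fixed point (blend halves the gap, drift adds 0.6 per cycle) illustrates why intermittent observations can hold a drifting model close to reality.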

  15. Thermal Modeling Method Improvements for SAGE III on ISS

    NASA Technical Reports Server (NTRS)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III, and covers the additional improvements made since that time. To expedite correlation of the model to thermal vacuum (TVAC) testing, both Langley TVAC chambers used to test the payload, along with their ground support equipment (GSE), were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters, such as initial temperature, heater voltage, and payload location, are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel built only for the in-air cases allowed all testing to be correlated in a single model. These modeling improvements and more will be described and illustrated in the paper.
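Making albedo and Earth IR time-varying amounts to tabulating each parameter against orbital position and interpolating at every time step. A minimal sketch, with invented node placement and values (real numbers would come from the ISS environment specification, not from here):

```python
import bisect

# Hypothetical environment parameters tabulated over one orbit
# (as a fraction of the orbital period); values are illustrative only.
nodes        = [0.0, 0.25, 0.5, 0.75, 1.0]
albedo       = [0.27, 0.35, 0.27, 0.20, 0.27]
earth_ir_wm2 = [240.0, 250.0, 240.0, 225.0, 240.0]

def at_orbit_fraction(t, xs, ys):
    """Linearly interpolate a tabulated orbital parameter at fraction t."""
    i = min(bisect.bisect_right(xs, t) - 1, len(xs) - 2)
    f = (t - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + f * (ys[i + 1] - ys[i])

print(round(at_orbit_fraction(0.375, nodes, albedo), 3))  # → 0.31
```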

  16. Loading rate increases during barefoot running in habitually shod runners: Individual responses to an unfamiliar condition.

    PubMed

    Tam, Nicholas; Astephen Wilson, Janie L; Coetzee, Devon R; van Pletsen, Leanri; Tucker, Ross

    2016-05-01

    The purpose of this study was to examine the effect of barefoot running on initial loading rate (LR), lower extremity joint kinematics and kinetics, and neuromuscular control in habitually shod runners, with an emphasis on individual responses to this unfamiliar condition. Kinematics and ground reaction force data were collected from 51 habitually shod runners during overground running in barefoot and shod conditions. Joint kinetics and stiffness were calculated with inverse dynamics. Inter-individual variability in initial LR was explored by separating individuals by a barefoot/shod ratio to identify acute responders and non-responders. Mean initial LR was 54.1% greater in the barefoot condition than in the shod condition. Differences between acute responders and non-responders were found in sagittal ankle angle at its peak and at initial ground contact. Correlations were found between barefoot sagittal ankle angle at initial ground contact and barefoot initial LR. Large variability in biomechanical responses to an acute exposure to barefoot running was found, with large intra-individual variability in initial LR but not in ankle plantar-dorsiflexion between footwear conditions. A majority of habitually shod runners do not exhibit the previously reported benefit of reduced initial LR when barefoot. Lastly, runners who increased LR when barefoot reduced LR when wearing shoes to levels similar to those seen in habitually barefoot runners who adopt a forefoot-landing pattern, despite increased dorsiflexion. Copyright © 2016 Elsevier B.V. All rights reserved.
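Loading-rate conventions vary between studies; the sketch below assumes one common definition (average vertical GRF slope over the 20-80% window between foot strike and the first impact peak) and a ratio greater than 1 marking runners whose LR increased barefoot. Neither convention is stated in the abstract, so both are assumptions:

```python
def initial_loading_rate(vgrf_bw, dt_s, peak_idx):
    """Average vertical GRF slope (body weights per second) over the
    20-80% window from foot strike to the first impact peak (a common
    convention, assumed here rather than taken from the study)."""
    i0, i1 = int(0.2 * peak_idx), int(0.8 * peak_idx)
    return (vgrf_bw[i1] - vgrf_bw[i0]) / ((i1 - i0) * dt_s)

def barefoot_shod_ratio(lr_barefoot, lr_shod):
    """Ratio > 1 means LR increased in the barefoot condition."""
    return lr_barefoot / lr_shod

# Synthetic linear force ramp: 0 to 1.5 BW over 20 samples at 1 kHz.
ramp = [1.5 * i / 20 for i in range(21)]
lr = initial_loading_rate(ramp, dt_s=0.001, peak_idx=20)
print(round(lr, 1))  # → 75.0 BW/s for this synthetic ramp
```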

  17. A generic multibody simulation

    NASA Technical Reports Server (NTRS)

    Hopping, K. A.; Kohn, W.

    1986-01-01

    Described is a dynamic simulation package which can be configured for orbital test scenarios involving multiple bodies. The rotational and translational state integration methods are selectable for each individual body and may be changed during a run if necessary. Characteristics of the bodies are determined by assigning components consisting of mass properties, forces, and moments, which are the outputs of user-defined environmental models. Generic model implementation is facilitated by a transformation processor which performs coordinate frame inversions. Transformations are defined in the initialization file as part of the simulation configuration. The simulation package includes an initialization processor, which consists of a command line preprocessor, a general purpose grammar, and a syntax scanner. These permit specifications of the bodies, their interrelationships, and their initial states in a format that is not dependent on a particular test scenario.
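Per-body selectable integration, switchable mid-run, can be sketched compactly. The abstract does not expose the package's interfaces, so every name below is an illustrative stand-in, and only translational state in one dimension is shown:

```python
# Two interchangeable state integrators with the same call signature.

def euler_step(state, accel, dt):
    x, v = state
    return x + v * dt, v + accel(x, v) * dt

def midpoint_step(state, accel, dt):
    x, v = state
    a = accel(x, v)
    xm, vm = x + 0.5 * dt * v, v + 0.5 * dt * a
    return x + dt * vm, v + dt * accel(xm, vm)

class Body:
    """Each body carries its own integration method, changeable mid-run."""
    def __init__(self, state, integrator):
        self.state, self.integrator = state, integrator

    def step(self, accel, dt):
        self.state = self.integrator(self.state, accel, dt)

gravity = lambda x, v: -9.81        # a toy 'environment model' component

body = Body(state=(100.0, 0.0), integrator=euler_step)
body.step(gravity, 0.1)             # one Euler step
body.integrator = midpoint_step     # change method during the run
body.step(gravity, 0.1)
print(round(body.state[0], 3), round(body.state[1], 3))
```

Because both integrators share a signature, swapping the method for one body mid-run is a single assignment, mirroring the selectable, changeable integration the abstract describes.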

  18. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. 
Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated into the KMD-WRF runs, using the product generated by NOAA/NESDIS. Model verification capabilities are also being transitioned to KMD using NCAR's Model Evaluation Tools (MET; Brown et al. 2009) software in conjunction with a SPoRT-developed scripting package, in order to quantify and compare errors in simulated temperature, moisture, and precipitation in the experimental WRF model simulations. This extended abstract and accompanying presentation summarize the efforts and training done to date to support this unique regional modeling initiative at KMD. To honor the memory of Dr. Peter J. Lamb and his extensive efforts in bolstering weather and climate science and capacity-building in Africa, we offer this contribution to the special Peter J. Lamb symposium. The remainder of this extended abstract is organized as follows. The collaborating international organizations involved in the project are presented in Section 2. Background information on the unique land surface input datasets is presented in Section 3. The hands-on training sessions from March 2014 and June 2015 are described in Section 4. Sample experimental WRF output and verification from the June 2015 training are given in Section 5. A summary is given in Section 6, followed by Acknowledgements and References.
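At its core, the point verification MET performs reduces to error statistics of paired forecast and observed values. A minimal sketch with hypothetical numbers (MET itself computes many more scores and handles matching, masking, and aggregation):

```python
import math

def bias(fcst, obs):
    """Mean forecast-minus-observation error."""
    return sum(f - o for f, o in zip(fcst, obs)) / len(obs)

def rmse(fcst, obs):
    """Root-mean-square forecast error."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(fcst, obs)) / len(obs))

wrf_t2m = [301.2, 299.8, 300.5, 302.1]   # K, hypothetical WRF 2-m temps
obs_t2m = [300.4, 300.0, 299.9, 300.9]   # K, matched station observations

print(f"bias={bias(wrf_t2m, obs_t2m):+.2f} K, "
      f"rmse={rmse(wrf_t2m, obs_t2m):.2f} K")
```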

  19. Simple and conditional visual discrimination with wheel running as reinforcement in rats.

    PubMed

    Iversen, I H

    1998-09-01

    Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.

  20. Establishing and running a trauma and dissociation unit: a contemporary experience.

    PubMed

    Middleton, Warwick; Higson, David

    2004-12-01

    To evaluate the functioning of a trauma and dissociation unit that has run for the past 8 years in a private hospital, with particular regard to operating philosophy, operating parameters, challenges encountered, research and educational initiatives, and the applicability of the treatment model to other settings. Despite the challenges associated with significant difficulties in the corporate management of a private health-care system, it has been possible to operate an inpatient and day hospital programme tailored to the needs of patients in the dissociative spectrum, and the lessons learnt from this experience are valid considerations in the future planning of mental health services overall.

  1. Interactions between hyporheic flow produced by stream meanders, bars, and dunes

    USGS Publications Warehouse

    Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.

    2013-01-01

    Stream channel morphology, from grain-scale roughness to large meanders, drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers; as a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features, including meanders, bars, and dunes, in sand-bed streams. Twenty cases were examined, spanning five degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance of, and interactions between, flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from the various scales of topographic features are close to, but not exactly, linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
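The additivity comparison amounts to summing the single-scale exchange fluxes and comparing that sum against the all-scales run. All numbers below are invented for illustration; only the structure of the check reflects the study design:

```python
# Hypothetical exchange flux (arbitrary units) from each single-scale run
# versus one run with all topographic scales superimposed.
single_scale_flux = {"meanders": 0.8, "bars": 1.1, "dunes": 2.6}
all_scales_flux = 4.1

linear_sum = sum(single_scale_flux.values())
ratio = all_scales_flux / linear_sum          # 1.0 would mean additivity
dominant = max(single_scale_flux, key=single_scale_flux.get)

print(dominant, round(ratio, 2))  # ratio near, but not equal to, 1
```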

  2. The Sunk Cost Effect with Pigeons: Some Determinants of Decisions about Persistence

    ERIC Educational Resources Information Center

    Macaskill, Anne C.; Hackenberg, Timothy D.

    2012-01-01

    The sunk cost effect occurs when an individual persists following an initial investment, even when persisting is costly in the long run. The current study used a laboratory model of the sunk cost effect. Two response alternatives were available: Pigeons could persist by responding on a schedule key with mixed ratio requirements, or escape by…

  3. A candidate secular variation model for IGRF-12 based on Swarm data and inverse geodynamo modelling

    NASA Astrophysics Data System (ADS)

    Fournier, Alexandre; Aubert, Julien; Thébault, Erwan

    2015-05-01

    In the context of the 12th release of the international geomagnetic reference field (IGRF), we present the methodology we followed to design a candidate secular variation model for years 2015-2020. An initial geomagnetic field model centered around 2014.3 is first constructed, based on Swarm magnetic measurements, for both the main field and its instantaneous secular variation. This initial model is next fed to an inverse geodynamo modelling framework in order to specify, for epoch 2014.3, the initial condition for the integration of a three-dimensional numerical dynamo model. The initialization phase combines the information contained in the initial model with that coming from the numerical dynamo model, in the form of three-dimensional multivariate statistics built from a numerical dynamo run unconstrained by data. We study the performance of this novel approach over two recent 5-year long intervals, 2005-2010 and 2009-2014. For a forecast horizon of 5 years, shorter than the large-scale secular acceleration time scale (~10 years), we find that it is safer to neglect the flow acceleration and to assume that the flow determined by the initialization is steady. This steady flow is used to advance the three-dimensional induction equation forward in time, with the benefit of estimating the effects of magnetic diffusion. The result of this deterministic integration between 2015.0 and 2020.0 yields our candidate average secular variation model for that time frame, which is thus centered on 2017.5.
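The steady-flow forecast step amounts to integrating the magnetic induction equation with the core flow frozen at its initialization-epoch estimate. A minimal statement in standard notation (the symbols are assumed here, not quoted from the paper):

```latex
% Steady-flow forecast: advance B with the induction equation while the
% core flow u is held at its initialization-epoch estimate u_0; the
% eta-term is the magnetic diffusion the scheme is able to estimate.
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{u}_0 \times \mathbf{B} \right)
  + \eta \nabla^{2} \mathbf{B},
\qquad \mathbf{u}_0 = \mathbf{u}(t_0), \qquad
\nabla \cdot \mathbf{B} = 0,
```

with \(t_0 = 2014.3\) and \(\eta\) the magnetic diffusivity.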

  4. Impacts of initial convective structure on subsequent squall line evolution

    NASA Astrophysics Data System (ADS)

    Varble, A.; Morrison, H.; Zipser, E. J.

    2017-12-01

    A Weather Research and Forecasting simulation of the 20 May 2011 MC3E squall line using 750-m horizontal grid spacing produces wide convective regions with strongly upshear-tilted convective updrafts and mesoscale bowing segments that are not present in radar observations. Similar features occur across several different bulk microphysics schemes, despite surface observations exhibiting cold pool equivalent potential temperature drops similar to, and pressure rises greater than, those in the simulation. Observed rear inflow remains more elevated than simulated, partly counteracting the cold pool circulation, whereas the simulated rear inflow descends to low levels, maintaining its strength and reinforcing the cold pool circulation that overpowers the pre-squall-line low-level vertical wind shear. The descent and strength of the simulated rear inflow are fueled by strong latent cooling caused by large ice water contents detrained from upshear-tilted convective cores that accumulate at the rear of the stratiform region. This simulated squall evolution is sensitive to model resolution, which is too coarse to resolve individual convective drafts. Nesting a 250-m horizontal grid spacing domain into the 750-m domain substantially alters the initial convective cells, with reduced latent cooling, weaker convective downdrafts, and a weaker initial cold pool. As the initial convective cells develop into a squall line, the rear inflow remains more elevated in the 250-m domain, with a cold pool that eventually develops to be just as strong as, and deeper than, the one in the 750-m run. Despite this, the convective cores remain more upright in the 250-m run, with the rear inflow partly counteracting the cold pool circulation, whereas the 750-m rear inflow near the surface reinforces the shallower cold pool and causes bowing in the squall line. 
The different structure in the 750-m run produces excessive mid-level front-to-rear detrainment that widens the convective region relative to the 250-m run and observations while continuing the cycle of excessive latent cooling and rear inflow descent at the rear of the stratiform region in a positive feedback. The causes of initial convective structure differences that produce the divergence in simulated squall line evolutions are explored.

  5. An OpenMI Implementation of a Water Resources System using Simple Script Wrappers

    NASA Astrophysics Data System (ADS)

    Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.

    2013-12-01

    This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant through organization within three modules that initialize the model, perform time steps, and finalize results. A configuration file specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments in Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are collectively run within this interdisciplinary framework from initial conditions into the future. It will be shown that, by applying constraints to one model, the impact of changes on the water resources system can be assessed.
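The three-module convention (initialize / perform time step / finalize) plus a configuration declaring exchange variables can be sketched as follows. All names, the toy drawdown rule, and the coupling pattern are illustrative assumptions, not the actual MARVL-style wrapper or OpenMI API:

```python
# Hypothetical exchange-item declaration, as a config would specify it.
CONFIG = {
    "inputs": ["pumping_rate"],    # what the model expects to receive
    "outputs": ["head"],           # what it makes available to others
}

class GroundwaterWrapper:
    """Toy model organized into the three wrapper modules."""

    def initialize(self, config):
        self.head = 100.0          # initial water-table elevation, m
        self.inputs = {name: 0.0 for name in config["inputs"]}

    def perform_time_step(self, dt_days):
        # Toy dynamics: slight recharge, drawdown from pumping.
        rate = self.inputs["pumping_rate"]
        self.head += dt_days * (0.01 - 0.05 * rate)

    def finalize(self):
        return {"head": self.head}

gw = GroundwaterWrapper()
gw.initialize(CONFIG)
for _ in range(365):
    gw.inputs["pumping_rate"] = 1.0   # value another linked model supplies
    gw.perform_time_step(1.0)
print(round(gw.finalize()["head"], 1))  # → 85.4
```

In a real coupling, the framework, not the script, would pass each model's declared outputs to the other models' declared inputs at every step.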

  6. Users Manual for the Geospatial Stream Flow Model (GeoSFM)

    USGS Publications Warehouse

    Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James

    2008-01-01

    The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.

  7. Climate mitigation: sustainable preferences and cumulative carbon

    NASA Astrophysics Data System (ADS)

    Buckle, Simon

    2010-05-01

    We develop a stylized AK growth model with both climate damages to ecosystem goods and services and sustainable preferences that allow trade-offs between present discounted utility and long-run climate damages. The simplicity of the model permits analytical solutions. Concern for the long term provides a strong driver for mitigation action. One plausible specification of sustainable preferences leads to the result that, for a range of initial parameter values, an optimizing agent would choose a level of cumulative carbon dioxide (CO2) emissions independent of initial production capital endowment and CO2 levels. There is no technological change so, for economies with sufficiently high initial capital and CO2 endowments, optimal mitigation will lead to disinvestment. For lower values of initial capital and/or CO2 levels, positive investment can be optimal, but still within the same overall level of cumulative emissions. One striking aspect of the model is the complexity of possible outcomes, in addition to these optimal solutions. We also identify a resource-constrained region and several regions where climate damages exceed resources available for consumption. Other specifications of sustainable preferences are discussed, as is the case of a hard constraint on long-run damages. Scientists are currently highlighting the potential importance of the cumulative carbon emissions concept as a robust yet flexible target for climate policymakers. This paper shows that it also has an ethical interpretation: it embodies an implicit trade-off in global welfare between present discounted welfare and long-term climate damages. We hope that further development of the ideas presented here might contribute to the research and policy debate on the critical areas of intra- and intergenerational welfare.
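A minimal sketch of the kind of setup described, with notation assumed here rather than taken from the paper: output is linear in capital (AK technology), cumulative emissions track output, and welfare trades off discounted utility against long-run damages.

```latex
% AK technology, capital accumulation, cumulative-emissions stock S, and
% sustainable preferences weighting long-run damages D against discounted
% utility. All symbols are illustrative, not the paper's.
\begin{aligned}
Y_t &= A K_t, &\qquad \dot K_t &= Y_t - C_t - \delta K_t,\\
\dot S_t &= \sigma Y_t, &\qquad
W &= \int_0^{\infty} e^{-\rho t}\, u(C_t)\, \mathrm{d}t
    \;-\; \Phi\bigl(D(S_\infty)\bigr).
\end{aligned}
```

Under such a specification the penalty term \(\Phi\) acts on the limiting stock \(S_\infty\), which is why an optimizing agent can target a level of cumulative emissions rather than an emissions path.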

  8. Results of the Greenland ice sheet model initialisation experiments: ISMIP6 - initMIP-Greenland

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew

    2017-04-01

    Ice sheet model initialisation has a large effect on projected future sea-level contributions and gives rise to important uncertainties. The goal of this intercomparison exercise for the continental-scale Greenland ice sheet is therefore to compare, evaluate and improve the initialisation techniques used in the ice sheet modelling community. The initMIP-Greenland project is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experimental set-up has been designed to allow comparison of the initial present-day state of the Greenland ice sheet between participating models and against observations. Furthermore, the initial states are tested with two schematic forward experiments to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss results that highlight the wide diversity of data sets, boundary conditions and initialisation techniques used in the community to generate initial states of the Greenland ice sheet.
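The drift experiment reduces to a simple diagnostic: the residual trend of a bulk quantity, such as total ice volume, over a forward run with no forcing applied. A toy version with hypothetical numbers:

```python
def drift_rate(volumes_gt, output_interval_yr):
    """Mean ice-volume trend (Gt/yr) over an unforced control run."""
    span_yr = (len(volumes_gt) - 1) * output_interval_yr
    return (volumes_gt[-1] - volumes_gt[0]) / span_yr

# Hypothetical 100-year control run, ice volume written every 10 years.
control_volumes = [2.96e6 - 150.0 * k for k in range(11)]
print(drift_rate(control_volumes, 10.0), "Gt/yr")  # → -15.0 Gt/yr
```

A well-initialised model would show a drift rate small compared to the response induced by the prescribed surface mass balance anomaly.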

  9. Predicting the chromatographic retention of polymers: application of the polymer model to poly(styrene/ethylacrylate)copolymers.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2012-02-17

    The retention behavior of a range of statistical poly(styrene/ethylacrylate) copolymers was investigated in order to determine whether retention volumes of these copolymers can be predicted using a suitable chromatographic retention model. The eluent composition at which the copolymers elute in gradient chromatography was found to be closely related to the eluent composition at which, in isocratic chromatography, the transition from elution in adsorption mode to exclusion mode occurs. For homopolymers this transition takes place at a critical eluent composition at which the molar mass dependence of elution volume vanishes; similar critical eluent compositions can thus be defined for statistical copolymers. The existence of a critical eluent composition is further supported by the narrower peak width, indicating that the broad molar mass distribution of the samples does not contribute to the retention volume. It is shown that the existing retention model for homopolymers allows correct quantitative prediction of retention volumes based on only three appropriate initial experiments: a gradient run and two isocratic experiments, one at the eluent composition of elution calculated from the first gradient run and the second at a slightly higher eluent strength. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Expansion of the Real-Time SPoRT-Land Information System for NOAA/National Weather Service Situational Awareness and Local Modeling Applications

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L; White, Kristopher D.

    2014-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework (hereafter referred to as the "SPoRT-LIS"). Output from the real-time SPoRT-LIS is used for (1) initializing land surface variables for local modeling applications, and (2) display in decision support systems for situational awareness and drought monitoring at select NOAA/National Weather Service (NWS) partner offices. The experimental CONUS run incorporates hourly quantitative precipitation estimation (QPE) from the National Severe Storms Laboratory Multi-Radar Multi-Sensor (MRMS) product, which will be transitioned into operations at the National Centers for Environmental Prediction (NCEP) in Fall 2014. This paper describes the current and experimental SPoRT-LIS configurations, and documents some of the limitations that remain even with the advent of MRMS precipitation analyses in the SPoRT-LIS simulations.

  11. Regional scale landslide risk assessment with a dynamic physical model - development, application and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh

    2013-04-01

    Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models exist that can do all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty to precisely define the location and volume of the released mass and from the inability of the models to compute the displacement with a large amount of individual initiation areas (computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process and fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall and a combination of both (spatial and temporal). The run-out module of the model considers the flow as a 2-D continuum medium solving the equations of mass balance and momentum conservation. The model is embedded in an open source environment geographical information system (GIS), it is computationally efficient and it is transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. 
The final outcome is an estimate of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138 New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
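The Monte Carlo scheme described above can be sketched in a few lines. The input distributions, the factor-of-safety criterion and all numerical values below are illustrative placeholders, not the model's actual site properties:

```python
import random

def factor_of_safety(cohesion, friction_term, driving_stress):
    # Illustrative slope-stability criterion: resisting over driving forces;
    # failure is declared when this ratio drops below 1.
    return (cohesion + friction_term) / driving_stress

def monte_carlo_failure_probability(n_runs=10_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        # Sample uncertain site properties from assumed input distributions
        cohesion = rng.gauss(10.0, 2.0)        # kPa (made-up values)
        friction_term = rng.gauss(8.0, 1.5)    # kPa (made-up values)
        driving_stress = rng.gauss(20.0, 3.0)  # kPa (made-up values)
        if factor_of_safety(cohesion, friction_term, driving_stress) < 1.0:
            failures += 1
    # The fraction of stochastic realisations that fail estimates the
    # failure probability, which propagates input uncertainty to the result.
    return failures / n_runs

p_fail = monte_carlo_failure_probability()
```

Each stochastic realisation evaluates the failure criterion once, so the spread of the inputs is carried through to a probability rather than a single deterministic answer.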

  12. Simulations of Hurricane Katrina (2005) with the 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.

    2006-01-01

Hurricane Katrina was the sixth most intense hurricane on record in the Atlantic. Forecasting Katrina posed major challenges, the most important of which was its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high-resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on intensity predictions for Katrina. It is found that the 0.125 degree runs give tracks comparable to the 0.25 degree runs but provide better intensity forecasts, bringing the center pressure much closer to observations, with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.

  13. Australia's marine virtual laboratory

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe

    2014-05-01

In any modelling study of realistic scenarios, a researcher has to go through a number of steps to set up a model and produce a simulation of value. The steps are generally the same regardless of the modelling system chosen: determining the time and space scales and processes of the required simulation; obtaining data for the initial set-up and for input during the simulation; obtaining observational data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be repeated for every simulation; the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the preparation steps and bring the researcher more quickly to the simulation and analysis stage. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. 
In MARVL we are developing a web-based, open-source application which offers a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region and any historical time period, with model specifications of their choice, through a user-friendly web application; b) access data sets with which to force a model and into which to nest a model; c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format suitable for model evaluation or data assimilation; and d) run the assembled configuration in a cloud-computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia, ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered to include biogeochemical models.

  14. Hydrocarbon polymeric binder for advanced solid propellant

    NASA Technical Reports Server (NTRS)

    Potts, J. E. (Editor); Ashcraft, A. C., Jr.; Wise, E. W.

    1971-01-01

Various experimental factors were examined to determine the source of difficulty in an isoprene polymerization in the 5-gallon reactor, which gave a non-uniform product of low functionality. It was concluded that process improvements relating to initiator and monomer purity were desirable, but that the main difficulty lay in the initiator feed system. A new pumping system was installed, and an analog simulation of the reactor, feed system and initiator decomposition kinetics was devised which permits the selection of initial initiator concentrations and feed rates that give a nearly uniform initiator concentration throughout a polymerization run. An isoprene polymerization was run in which the process improvements were implemented.
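The initiator balance that such a simulation solves can be sketched as a simple feed-minus-decomposition ODE. The first-order decomposition law is an assumption here, and the rate constant, volume and target concentration are made-up illustrative values:

```python
def simulate_initiator(c0, feed_rate, volume, k_d, dt=0.1, t_end=100.0):
    """Euler integration of dC/dt = feed_rate/volume - k_d * C.

    A minimal sketch of the kind of feed/decomposition balance the analog
    simulation solved; constants are illustrative, not the report's values.
    """
    c, t = c0, 0.0
    history = []
    while t <= t_end:
        history.append(c)
        c += dt * (feed_rate / volume - k_d * c)
        t += dt
    return history

k_d, volume, c_target = 0.05, 20.0, 0.01   # illustrative values
# Choosing feed = k_d * C_target * V makes C_target a steady state, so a
# run started at C_target keeps the initiator concentration nearly uniform.
feed = k_d * c_target * volume
history = simulate_initiator(c_target, feed, volume, k_d)
```

With the feed pump off (feed_rate = 0), the same sketch reproduces the plain first-order decay that depletes initiator over a run.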

  15. The Impact of Soil Moisture Initialization On Seasonal Precipitation Forecasts

    NASA Technical Reports Server (NTRS)

    Koster, R. D.; Suarez, M. J.; Tyahla, L.; Houser, Paul (Technical Monitor)

    2002-01-01

    Some studies suggest that the proper initialization of soil moisture in a forecasting model may contribute significantly to the accurate prediction of seasonal precipitation, especially over mid-latitude continents. In order for the initialization to have any impact at all, however, two conditions must be satisfied: (1) the initial soil moisture anomaly must be "remembered" into the forecasted season, and (2) the atmosphere must respond in a predictable way to the soil moisture anomaly. In our previous studies, we identified the key land surface and atmospheric properties needed to satisfy each condition. Here, we tie these studies together with an analysis of an ensemble of seasonal forecasts. Initial soil moisture conditions for the forecasts are established by forcing the land surface model with realistic precipitation prior to the start of the forecast period. As expected, the impacts on forecasted precipitation (relative to an ensemble of runs that do not utilize soil moisture information) tend to be localized over the small fraction of the earth with all of the required land and atmosphere properties.

  16. The Challenge of Evaluating the Intensity of Short Actions in Soccer: A New Methodological Approach Using Percentage Acceleration.

    PubMed

    Sonderegger, Karin; Tschopp, Markus; Taube, Wolfgang

    2016-01-01

There are several approaches to quantifying physical load in team sports using positional data, of which distances covered in different speed zones are the most common. Recent studies have additionally used acceleration data in order to take short, intense actions into account. However, the fact that maximal acceleration decreases with increasing initial running speed is ignored, which introduces a bias. The aim of our study was to develop a new methodological approach that removes this bias. For this purpose, percentage acceleration was calculated as the ratio of the maximal acceleration of the action (amax,action) to the maximal voluntary acceleration (amax) achievable at a particular initial running speed (percentage acceleration [%] = amax,action / amax * 100). To define amax, seventy-two highly trained junior male soccer players (17.1 ± 0.6 years) completed maximal sprints from standing and from three different constant initial running speeds (vinit; trotting: ~6.0 km·h-1; jogging: ~10.8 km·h-1; running: ~15.0 km·h-1). The amax was 6.01 ± 0.55 m·s-2 from a standing start, 4.33 ± 0.40 from trotting, 3.20 ± 0.49 from jogging and 2.29 ± 0.34 from running. The amax correlated significantly with vinit (r = -0.98), and the linear regression equation for highly trained junior soccer players was: amax = -0.23 * vinit + 5.99. Using this linear regression, we propose to classify high-intensity actions as accelerations >75% of amax, corresponding for our population to acceleration values >4.51 m·s-2 initiated from standing, >3.25 from trotting, >2.40 from jogging, and >1.72 from running. The use of percentage acceleration avoids the bias of underestimating actions with high initial running speed and overestimating those with low initial running speed. Furthermore, percentage acceleration allows individual intensity thresholds to be determined that are specific to one population or a single player.
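The proposed measure can be implemented directly from the regression reported above, assuming vinit is expressed in km·h-1 (consistent with the speeds quoted); a minimal sketch:

```python
def a_max(v_init_kmh):
    """Maximal voluntary acceleration (m·s-2) at a given initial running
    speed, using the regression reported for this cohort:
    amax = -0.23 * vinit + 5.99 (vinit assumed in km·h-1)."""
    return -0.23 * v_init_kmh + 5.99

def percentage_acceleration(a_action, v_init_kmh):
    """Acceleration of an action as a percentage of the maximal voluntary
    acceleration achievable at that initial speed."""
    return a_action / a_max(v_init_kmh) * 100.0

def is_high_intensity(a_action, v_init_kmh, threshold=75.0):
    """Classify an action as high intensity when it exceeds the proposed
    75% percentage-acceleration threshold."""
    return percentage_acceleration(a_action, v_init_kmh) > threshold
```

For example, an action of 3.0 m·s-2 started from jogging (~10.8 km·h-1) is roughly 86% of the achievable maximum and counts as high intensity, while the same 3.0 m·s-2 from standing is only about 50% and does not; this is exactly the bias the percentage measure removes.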

  17. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae)

    PubMed Central

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

Abstract Degraded areas pose a challenge for the sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of succession in five different types of degraded areas in Poland, based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure of the stage of succession. The course of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape-related aspects seem to be important with respect to these differences. As characteristic phases, a ‘delay phase’, an ‘increase phase’ and a ‘stagnation phase’ were identified. In general, the courses of succession could be described by four parameters: (1) ‘initial degradation level’, (2) ‘delay’, (3) ‘increase rate’ and (4) ‘recovery level’. Applying the analytic solution of the logistic equation, characteristic values of the parameters were identified for each of the five area types. The model is of practical use because it provides a means to compare parameter values obtained in different areas, to give hints for intervention, and to provide prognoses about future succession in those areas. Furthermore, it is possible to transfer the model to other indicators of succession. PMID:21738419
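The analytic solution of the logistic equation used to describe these courses of succession can be written out as follows; the parameter values in the usage lines are illustrative, not fitted values from the study:

```python
import math

def mib_logistic(t, m0, K, r):
    """Analytic solution of the logistic equation dM/dt = r*M*(1 - M/K):

        M(t) = K*m0*exp(r*t) / (K + m0*(exp(r*t) - 1))

    Loosely mapped onto the paper's parameters: m0 ~ initial degradation
    level, K ~ recovery level, r ~ increase rate; a long flat 'delay phase'
    emerges naturally when m0 is small relative to K, followed by the
    'increase phase' and the 'stagnation phase' near K."""
    e = math.exp(r * t)
    return K * m0 * e / (K + m0 * (e - 1.0))

# Illustrative run: low initial MIB, recovery level 300, increase rate 0.2
curve = [mib_logistic(t, m0=5.0, K=300.0, r=0.2) for t in range(0, 101, 10)]
```

Plotting such a curve reproduces the three phases described above, and fitting m0, K and r to observed MIB series is what yields the characteristic parameter values per area type.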

  18. Hydrodynamic instability of elastic-plastic solid plates at the early stage of acceleration.

    PubMed

    Piriz, A R; Sun, Y B; Tahir, N A

    2015-03-01

A model is presented for the linear Rayleigh-Taylor instability taking place at the early stage of acceleration of an elastic-plastic solid, when the shock wave is still running into the solid and is driven by a time-varying pressure on the interface. When the shock is formed sufficiently close to the interface, this stage is considered to follow a previous initial phase controlled by the Richtmyer-Meshkov instability, which sets the new initial conditions. The model reproduces the behavior of the instability observed in earlier numerical simulations and provides a simpler physical picture than the currently existing one for this stage of the instability evolution.

  19. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

Improving hydrological models remains a difficult task, and many avenues can be explored: improving the spatial representation, searching for more robust parametrizations, formulating some processes better, or modifying model structures by trial and error. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps in order to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and of the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. 
The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  20. Development and Implementation of Dynamic Scripts to Support Local Model Verification at National Weather Service Weather Forecast Offices

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Case, Jonathan L.; Gotway, John H.; White, Kristopher; Medlin, Jeffrey; Wood, Lance; Radell, Dave

    2014-01-01

    Local modeling with a customized configuration is conducted at National Weather Service (NWS) Weather Forecast Offices (WFOs) to produce high-resolution numerical forecasts that can better simulate local weather phenomena and complement larger scale global and regional models. The advent of the Environmental Modeling System (EMS), which provides a pre-compiled version of the Weather Research and Forecasting (WRF) model and wrapper Perl scripts, has enabled forecasters to easily configure and execute the WRF model on local workstations. NWS WFOs often use EMS output to help in forecasting highly localized, mesoscale features such as convective initiation, the timing and inland extent of lake effect snow bands, lake and sea breezes, and topographically-modified winds. However, quantitatively evaluating model performance to determine errors and biases still proves to be one of the challenges in running a local model. Developed at the National Center for Atmospheric Research (NCAR), the Model Evaluation Tools (MET) verification software makes performing these types of quantitative analyses easier, but operational forecasters do not generally have time to familiarize themselves with navigating the sometimes complex configurations associated with the MET tools. To assist forecasters in running a subset of MET programs and capabilities, the Short-term Prediction Research and Transition (SPoRT) Center has developed and transitioned a set of dynamic, easily configurable Perl scripts to collaborating NWS WFOs. The objective of these scripts is to provide SPoRT collaborating partners in the NWS with the ability to evaluate the skill of their local EMS model runs in near real time with little prior knowledge of the MET package. The ultimate goal is to make these verification scripts available to the broader NWS community in a future version of the EMS software. 
This paper provides an overview of the SPoRT MET scripts, instructions for how the scripts are run, and example use cases.

  1. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. 
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally, we discuss the limitations of this approach and briefly mention how similar methods might be used to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators and learning complex model-error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
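A minimal emulator of this kind (posterior mean only, fixed kernel parameters, toy one-dimensional simulator) might look as follows; the design, kernel settings and 'simulator' are all illustrative:

```python
import numpy as np

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def fit_emulator(x_train, y_train, length=1.0, var=1.0, noise=1e-6):
    """Precompute the Gaussian-process weights alpha = (K + noise*I)^-1 y."""
    K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return x_train, alpha, length, var

def emulate(x_new, emulator):
    """Posterior mean at new inputs; interpolates the training runs."""
    x_train, alpha, length, var = emulator
    return rbf(x_new, x_train, length, var) @ alpha

# 'Ensemble' of simulator runs chosen to cover the input space
# (toy simulator: sin over [0, 6])
x_design = np.linspace(0.0, 6.0, 15)
y_design = np.sin(x_design)
em = fit_emulator(x_design, y_design, length=1.0)

# Probabilistic forecast: push Monte Carlo samples of the uncertain input
# through the cheap emulator instead of the expensive simulator
rng = np.random.default_rng(0)
inputs = rng.normal(3.0, 0.3, size=5000)
forecast = emulate(inputs, em)
```

The 5000 emulator evaluations stand in for 5000 simulator runs; the spread of `forecast` is the output distribution implied by the input uncertainty, which is the Monte Carlo procedure sketched in the abstract.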

  2. Evaluation of Convective Transport in the GEOS-5 Chemistry and Climate Model

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.; Ott, Lesley E.; Shi, Jainn J.; Tao, Wei-Kuo; Mari, Celine; Schlager, Hans

    2011-01-01

    The NASA Goddard Earth Observing System (GEOS-5) Chemistry and Climate Model (CCM) consists of a global atmospheric general circulation model and the combined stratospheric and tropospheric chemistry package from the NASA Global Modeling Initiative (GMI) chemical transport model. The subgrid process of convective tracer transport is represented through the Relaxed Arakawa-Schubert parameterization in the GEOS-5 CCM. However, substantial uncertainty for tracer transport is associated with this parameterization, as is the case with all global and regional models. We have designed a project to comprehensively evaluate this parameterization from the point of view of tracer transport, and determine the most appropriate improvements that can be made to the GEOS-5 convection algorithm, allowing improvement in our understanding of the role of convective processes in determining atmospheric composition. We first simulate tracer transport in individual observed convective events with a cloud-resolving model (WRF). Initial condition tracer profiles (CO, CO2, O3) are constructed from aircraft data collected in undisturbed air, and the simulations are evaluated using aircraft data taken in the convective anvils. A single-column (SCM) version of the GEOS-5 GCM with online tracers is then run for the same convective events. SCM output is evaluated based on averaged tracer fields from the cloud-resolving model. Sensitivity simulations with adjusted parameters will be run in the SCM to determine improvements in the representation of convective transport. The focus of the work to date is on tropical continental convective events from the African Monsoon Multidisciplinary Analyses (AMMA) field mission in August 2006 that were extensively sampled by multiple research aircraft.

  3. Adaptive use of research aircraft data sets for hurricane forecasts

    NASA Astrophysics Data System (ADS)

    Biswas, M. K.; Krishnamurti, T. N.

    2008-02-01

This study uses an adaptive observational strategy for hurricane forecasting. It shows the impacts of Lidar Atmospheric Sensing Experiment (LASE) and dropsonde data sets from Convection and Moisture Experiment (CAMEX) field campaigns on hurricane track and intensity forecasts. The following cases are used in this study: Bonnie, Danielle and Georges of 1998 and Erin, Gabrielle and Humberto of 2001. A single model run for each storm is carried out using the Florida State University Global Spectral Model (FSUGSM) with the European Center for Medium Range Weather Forecasts (ECMWF) analysis as initial conditions, in addition to 50 other model runs where the analysis is randomly perturbed for each storm. The centers of maximum variance of the DLM heights are located from the forecast error variance fields at the 84-hr forecast. Back correlations are then performed using the centers of these maximum variances and the fields at the 36-hr forecast. The regions having the highest correlations in the vicinity of the hurricanes are indicative of regions from which the error growth emanates, and suggest the need for additional observations. Data sets are next assimilated in those areas that contain high correlations. Forecasts are computed using the new initial conditions for the storm cases, and track and intensity skills are then examined with respect to the control forecast. The adaptive strategy is capable of identifying sensitive areas where additional observations can help in reducing hurricane track forecast errors. A reduction of position error by approximately 52% for the day-3 forecast (averaged over 7 storm cases) relative to the control runs is observed. The intensity forecast shows only a slight positive impact due to the model's coarse resolution.

  4. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS there are downscaled GCMs that can reliably simulate historical climatic conditions. But in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test at a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
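The two-sample KS statistic underlying this comparison is simply the maximum absolute difference between two empirical CDFs. A plain-Python sketch (in practice a library routine such as scipy.stats.ks_2samp, which also returns a p-value, would be used):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples.
    Significance testing (critical values / p-values) is omitted."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        # Advance past the next distinct value in either sample,
        # handling ties by moving both indices together.
        x = min(a[i], b[j])
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        # Empirical CDFs at x are i/na and j/nb
        d = max(d, abs(i / na - j / nb))
    return d
```

Identical distributions give a statistic near 0 and completely separated ones give 1, so comparing the statistic against a critical value at the 0.05 level is how a downscaled GCM is judged to replicate (or not) the gridded-station distribution.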

  5. A statistical model for combustion resonance from a DI diesel engine with applications

    NASA Astrophysics Data System (ADS)

    Bodisco, Timothy; Low Choy, Samantha; Masri, Assaad; Brown, Richard J.

    2015-08-01

This paper introduces a Bayesian model for isolating the resonant frequency from combustion chamber resonance. The model focuses on characterising the initial rise in the resonant frequency in order to investigate the rise of in-cylinder bulk temperature associated with combustion. By resolving the model parameters it is possible to determine: the start of pre-mixed combustion, the start of diffusion combustion, the initial resonant frequency, the resonant frequency as a function of crank angle, the in-cylinder bulk temperature as a function of crank angle, and the trapped mass as a function of crank angle. The Bayesian method allows individual cycles to be examined without cycle-averaging, enabling inter-cycle variability studies. Results are shown for a turbocharged, common-rail compression-ignition engine run at 2000 rpm and full load.

  6. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
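The capture-fraction computation described above can be sketched as follows. The dictionary of boundary-flow components and all numbers are hypothetical, with the sign convention that positive rates are flows into the aquifer:

```python
def capture_fraction(baseline, with_well, withdrawal_rate):
    """Capture fraction for one model cell.

    baseline, with_well: dicts of boundary flow rates (positive = into the
    aquifer) from the steady-state run without withdrawals and from the
    transient run with one well in this cell. Summing the changes in all
    sources and sinks (increased inflow appears as a positive change,
    decreased outflow also appears as a positive change) and dividing by
    the withdrawal rate gives the capture fraction for the chosen time.
    """
    change = sum(with_well[name] - baseline[name] for name in baseline)
    return change / withdrawal_rate

# Hypothetical budgets: a 100 m3/d well induces 30 m3/d of extra stream
# leakage (inflow) and reduces spring discharge (outflow) by 25 m3/d.
baseline = {"stream_leakage": 50.0, "spring_discharge": -80.0}
with_well = {"stream_leakage": 80.0, "spring_discharge": -55.0}
frac = capture_fraction(baseline, with_well, withdrawal_rate=100.0)
```

Here frac is 0.55, i.e. 55% of the withdrawal is capture and the remaining 45% still comes from storage; repeating this for a well placed in every cell and contouring the fractions produces the capture map for the selected time frame.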

  7. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    PubMed Central

    Qi, Nathan R.

    2018-01-01

High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  8. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2014-04-30

    experiments (tiles from supplier, sintered SiC). SUBJECT TERMS: adhesive layer effect, .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, aluminum (Al5083). Impacts by the .30cal AP-M2 projectile are modeled using SPH elements in AutoDyn; center-strike model validation runs with SiC tiles; smoothed-particle hydrodynamics (SPH) used for all parts; SPH size 0.4 used initially, SPH size 0.2 used to capture

  9. A coupled high-resolution modeling system to simulate biomass burning emissions, plume rise and smoke transport in real time over the contiguous US

    NASA Astrophysics Data System (ADS)

    Ahmadov, R.; Grell, G. A.; James, E.; Freitas, S.; Pereira, G.; Csiszar, I. A.; Tsidulko, M.; Pierce, R. B.; McKeen, S. A.; Saide, P.; Alexander, C.; Benjamin, S.; Peckham, S.

    2016-12-01

Wildfires can have a huge impact on air quality and visibility over large parts of the US. Accurately predicting wildfire air quality is quite challenging, given significant uncertainties in the modeling of biomass burning (BB) emissions, fire size, plume rise and smoke transport. We developed a new smoke modeling system (HRRR-Smoke) based on the coupled meteorology-chemistry model WRF-Chem. The HRRR-Smoke modeling system uses fire radiative power (FRP) data measured by the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership satellite. Using the FRP data enables fire emissions, fire size and plume rise to be predicted more accurately. Another advantage of the VIIRS data is fire detection and characterization at high spatial resolution during both day and nighttime. The HRRR-Smoke model was run in real time for summer 2016 at 3-km horizontal grid resolution over a CONUS domain by the NOAA/ESRL Global Systems Division (GSD). The model simulates the advection and mixing of fine particulate matter (PM2.5, or smoke) emitted by the calculated BB emissions. The BB emissions include both smoldering and flaming fractions. Fire plume rise is parameterized in an online mode during the model integration. In addition to smoke, anthropogenic emissions of PM2.5 are transported inline as a passive tracer by HRRR-Smoke. The HRRR-Smoke real-time runs use meteorological fields for initial and lateral boundary conditions from the experimental real-time HRRR(X) numerical weather prediction model, also run at NOAA/ESRL/GSD. The model is initialized every 6 hours (00, 06, 12 and 18 UTC) daily, using newly generated meteorological fields and FRP data obtained during the previous 24 hours. The model then produces meteorological and smoke forecasts for the next 36 hours. The smoke fields are cycled from one forecast to the next. 
Predicted near-surface and vertically integrated smoke concentrations are visualized online at http://rapidrefresh.noaa.gov/HRRRsmoke/. In this talk, we discuss the major components of the HRRR-Smoke modeling system. We present modeled smoke fields for some major wildfire cases over the western US in 2016 and discuss the model performance for those cases.

  10. Partitioning the Metabolic Cost of Human Running: A Task-by-Task Approach

    PubMed Central

    Arellano, Christopher J.; Kram, Rodger

    2014-01-01

    Compared with other species, humans can be very tractable and thus an ideal “model system” for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has historically been modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the “cost of generating force” hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be “individually” partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. PMID:24838747
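    The bookkeeping in the abstract above can be reproduced with a few lines of arithmetic. A minimal sketch, using the task percentages reported above (the function and variable names are our own, not the authors'):

```python
# Illustrative arithmetic only; the task shares are the percentages
# reported in the abstract above.

def naive_sum(costs):
    """Sum individually measured task costs (ignores task interactions)."""
    return sum(costs.values())

# Individually partitioned shares of the net metabolic cost of running (%).
individual = {"body weight support": 74, "propulsion": 37, "leg swing": 20}
print(naive_sum(individual))  # 131 -> the paradoxical overestimate

# Synergistic accounting reported in the abstract (%).
synergistic = {
    "support + propulsion (combined)": 80,
    "leg swing": 7,
    "lateral balance": 2,
}
print(sum(synergistic.values()))  # 89 -> accounts for ~89% of the net cost
```

    The comparison makes the abstract's point concrete: summing tasks measured in isolation double-counts their interactions, while the synergistic accounting leaves ∼11% unexplained.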

  11. Partitioning the metabolic cost of human running: a task-by-task approach.

    PubMed

    Arellano, Christopher J; Kram, Rodger

    2014-12-01

    Compared with other species, humans can be very tractable and thus an ideal "model system" for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has historically been modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the "cost of generating force" hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be "individually" partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  12. An assessment of mean annual precipitation in Rajasthan, India needed to maintain Mid-Holocene lakes

    NASA Astrophysics Data System (ADS)

    Gill, E.; Rajagopalan, B.; Molnar, P. H.

    2013-12-01

    Paleo-climate literature reports evidence of freshwater lakes over Rajasthan, a region of northwestern India, during the mid-Holocene (~6 ka), where desert conditions prevail at present. It has been suggested that mid-Holocene temperatures were warmer, precipitation was nearly double current levels, and there was an enhanced La Niña-like state. While previous analyses infer that the lakes were sustained by generally high precipitation and low evaporation, we provide a systematic analysis of the relevant energy budget quantities and the dynamic relationships between them. We have built a hydrological lake model to reconstruct lake levels throughout the Holocene. The model output is evaporation from the lake; inputs are precipitation over the lake and catchment runoff, determined using precipitation, Priestley-Taylor evapotranspiration, interception and infiltration. Initial tests of the model have been completed with current climate conditions to ensure accurate behavior. Contemporary runs used station precipitation and temperature data [Rajeevan et al., 2006] for the region surrounding Lake Didwana (27°N 74°E). Digital elevation maps were used to compile lake bathymetry for Lake Didwana. Under current climate conditions, a full Lake Didwana (~9 m) empties over the first several years. While lake depth varies yearly, increasing with each monsoon season, variations following the initial decline are minimal (~1.0 m). We ran the model with a 2000-year sequence of precipitation and temperature generated by resampling the observed weather sequences, with a suite of baseline fractions of vegetation cover and increased precipitation, and with solar insolation appropriate to the mid-Holocene period. Initial runs revealed that precipitation amount and the percent of vegetated catchment area influence lake levels, but insolation alone does not. 
Incrementally changing precipitation (between current levels and a 75% increase) and the percent of vegetated area (between 10 and 90%) reveals that a 50% increase in precipitation alone is not enough to reach the maximum lake level of 7 m reported by Enzel et al. [1999] for the mid-Holocene. For Lake Didwana to reach maximum levels, at least 50% more precipitation than today and a vegetated catchment fraction of at least 50% are both required; if precipitation were twice that of today and vegetation covered 50% of the area, the lake would have been deeper than 9 m. Future work involves generating 2000-year-long precipitation and temperature sequences representing the early-, mid-, and late-Holocene using two approaches: k-nearest neighbor and generalized linear models. Using these, we will run the lake model to determine what combinations of precipitation, evaporation, and other variables are necessary to sustain the lakes. While model runs suggest that monsoon rainfall should increase in a warming world, observations show we are currently in the longest epoch of below-normal south-Asian monsoonal rainfall. By using the mid-Holocene as an analog for a future warming world, this study could expand the understanding of the south-Asian monsoon's potential response to warming.
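    The lake model described above is essentially an annual water balance driven by precipitation, catchment runoff and evaporation. A minimal illustrative sketch, with made-up numbers and a simple runoff coefficient standing in for the vegetation-dependent runoff calculation (none of these values are the authors' calibrated parameters):

```python
# Toy annual water-balance model of a closed lake; all numbers and the
# clamped-depth formulation are illustrative assumptions.

def step_lake(depth_m, precip_m, evap_m, runoff_m, max_depth_m=9.0):
    """Advance lake depth by one year: depth += P + runoff - E, clamped
    between an empty lake (0) and the basin's maximum depth."""
    depth_m += precip_m + runoff_m - evap_m
    return min(max(depth_m, 0.0), max_depth_m)

def simulate(years, precip_m, evap_m, runoff_coeff, depth0_m=9.0):
    """Catchment runoff is approximated here as runoff_coeff * precipitation."""
    depth = depth0_m
    for _ in range(years):
        depth = step_lake(depth, precip_m, evap_m, runoff_coeff * precip_m)
    return depth

# Under an arid, present-day-like balance (E >> P) a full lake empties:
print(simulate(years=20, precip_m=0.4, evap_m=2.0, runoff_coeff=0.3))  # 0.0
# A wetter climate with a well-vegetated catchment sustains a full lake:
print(simulate(years=20, precip_m=0.8, evap_m=1.0, runoff_coeff=0.5))  # 9.0
```

    The two runs mirror the abstract's finding that precipitation and the vegetated (runoff-producing) fraction of the catchment together, not either alone, control whether the lake persists.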

  13. Practical Nonlinearities

    DTIC Science & Technology

    2016-07-01

    All Initial Designs for Final Fab Run Month 29 Masks and wafers prepared for Final Fab Run Month 30 Start of Final Fab Run Month 35 Completion of...Final Fab Run Month 36 Delivery of devices based on designs from other DEFYS performers Because of momentum from efforts prior to the start of...report (June 2016), our project is completed, with most tasks completed ahead of schedule. For example, the 3rd Fab Run started 5 months early and was

  14. Short-term changes in running mechanics and foot strike pattern after introduction to minimalistic footwear.

    PubMed

    Willson, John D; Bjorhus, Jordan S; Williams, D S Blaise; Butler, Robert J; Porcari, John P; Kernozek, Thomas W

    2014-01-01

    Minimalistic footwear has garnered widespread interest in the running community, based largely on the premise that the footwear may reduce certain running-related injury risk factors through adaptations in running mechanics and foot strike pattern. To examine short-term adaptations in running mechanics among runners who typically run in conventional cushioned heel running shoes as they transition to minimalistic footwear. A 2-week, prospective, observational study. A movement science laboratory. Nineteen female runners with a rear foot strike (RFS) pattern who usually train in conventional running shoes. The participants trained for 20 minutes, 3 times per week for 2 weeks by using minimalistic footwear. Three-dimensional lower extremity running mechanics were analyzed before and after this 2-week period. Hip, knee, and ankle joint kinematics at initial contact; step length; stance time; peak ankle joint moment and joint work; impact peak; vertical ground reaction force loading rate; and foot strike pattern preference were evaluated before and after the intervention. The knee flexion angle at initial contact increased 3.8° (P < .01), but the ankle and hip flexion angles at initial contact did not change after training. No changes in ankle joint kinetics or running temporospatial parameters were observed. Before the intervention, the majority of participants (71%) demonstrated an RFS pattern while running in minimalistic footwear. The proportion of runners with an RFS pattern did not decrease after 2 weeks (P = .25). Those runners who chose an RFS pattern in minimalistic shoes experienced a vertical loading rate that was 3 times greater than those who chose to run with a non-RFS pattern. Few systematic changes in running mechanics were observed among participants after 2 weeks of training in minimalistic footwear. 
The majority of the participants continued to use an RFS pattern after training in minimalistic footwear, and these participants experienced higher vertical loading rates. Continued exposure to these greater loading rates may have detrimental effects over time. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  15. Initial Probabilistic Evaluation of Reactor Pressure Vessel Fracture with Grizzly and Raven

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Hoffman, William; Sen, Sonat

    2015-10-01

    The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled reactor pressure vessels (RPVs). Grizzly can be used to model the thermal/mechanical response of an RPV under transient conditions that would be observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. These capabilities have been demonstrated previously. A typical RPV is likely to contain a large population of pre-existing flaws introduced during the manufacturing process. This flaw population is characterized statistically through probability density functions of the flaw distributions. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation during a transient event. This report documents initial work to perform probabilistic analysis of RPV fracture during a PTS event using a combination of the RAVEN risk analysis code and Grizzly. This work is limited in scope, considering only a single flaw with deterministic geometry, but with uncertainty introduced in the parameters that influence fracture toughness. These results are benchmarked against equivalent models run in the FAVOR code. When fully developed, the RAVEN/Grizzly methodology for modeling probabilistic fracture in RPVs will provide a general capability that can be used to consider a wider variety of vessel and flaw conditions that are difficult to consider with current tools. 
In addition, this will provide access to advanced probabilistic techniques provided by RAVEN, including adaptive sampling and parallelism, which can dramatically decrease run times.
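    The probabilistic step described in the report can be illustrated with a toy Monte Carlo calculation. In this sketch the toughness distribution, its parameters, and the applied stress intensity factor are all invented for illustration; they are not values from Grizzly, RAVEN, or FAVOR:

```python
# Toy Monte Carlo estimate of crack-initiation probability for a single
# deterministic flaw with uncertain fracture toughness. The normal
# distribution and all numbers are illustrative assumptions.
import random

def crack_initiation_probability(k_applied, kic_mean, kic_sd,
                                 n=100_000, seed=42):
    """Fraction of sampled toughness values that fall below the applied K_I."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(kic_mean, kic_sd) < k_applied)
    return failures / n

# Applied K_I of 60 MPa*sqrt(m) vs. a toughness of 80 +/- 15 MPa*sqrt(m):
p = crack_initiation_probability(k_applied=60.0, kic_mean=80.0, kic_sd=15.0)
print(round(p, 3))  # roughly 0.09, the tail of the toughness distribution
```

    Tools like RAVEN replace this naive sampling with adaptive sampling and parallel dispatch of the underlying physics model, which is what makes the approach tractable when each "sample" is a full Grizzly run.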

  16. A Newton-Krylov solver for fast spin-up of online ocean tracers

    NASA Astrophysics Data System (ADS)

    Lindsay, Keith

    2017-01-01

    We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
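    The idea behind the solver can be illustrated on a scalar caricature: treat one model "year" as a map phi(x), so spin-up means finding the fixed point phi(x*) = x*. Brute force iterates the map for thousands of model years; a Newton solver on the residual F(x) = phi(x) - x reaches the same equilibrium in a handful of residual evaluations. This sketch is purely illustrative (the actual solver operates on a large tracer state vector and uses a Krylov method for the linear solves; all numbers here are made up):

```python
# Scalar toy: phi() stands in for "run the ocean model for one year".
# Slow relaxation makes brute-force spin-up take many iterations.

def phi(x, relax=0.02, source=1.0):
    """One 'year' of tracer evolution: slow relaxation toward equilibrium."""
    return x + relax * (source - relax * x)

def brute_force_spinup(x=0.0, tol=1e-6, max_years=100_000):
    """Iterate the map until the year-over-year drift is below tol."""
    years = 0
    while abs(phi(x) - x) > tol and years < max_years:
        x, years = phi(x), years + 1
    return x, years

def newton_spinup(x=0.0, tol=1e-6, h=1e-6, max_iters=50):
    """Solve F(x) = phi(x) - x = 0 with finite-difference Newton steps."""
    for it in range(max_iters):
        f = phi(x) - x
        if abs(f) <= tol:
            return x, it
        dfdx = (phi(x + h) - (x + h) - f) / h  # F'(x) by finite differences
        x -= f / dfdx
    return x, max_iters

print(brute_force_spinup()[1])  # tens of thousands of model years
print(newton_spinup()[1])       # a handful of Newton iterations
```

    Both approaches land on the same equilibrium; the difference, as in the abstract, is orders of magnitude in the number of model evaluations.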

  17. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining the parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeler little opportunity to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool widely used in groundwater modeling that can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires geological, meteorological, hydraulic and hydrological knowledge from the modeler. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. 
Each subcatchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that each observation group makes the same contribution to the total objective function; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013). In adding regularization, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. A case study with calibration and validation results for the model will be presented.
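    The weight-balancing idea behind the PWTADJ1 step can be sketched in a few lines. The group names, weights and residuals below are invented for illustration, and this sketch is not the PEST utility itself:

```python
# Sketch of inter-group weight balancing: rescale each observation group's
# weight so every group contributes equally to the total objective function.
import math

def balance_group_weights(groups):
    """groups: {name: (weight, [residuals])} -> {name: adjusted weight}.

    A group's contribution is phi_g = sum((w * r)^2); scaling w by
    sqrt(target / phi_g) makes every group's contribution equal to target."""
    phi = {g: sum((w * r) ** 2 for r in res) for g, (w, res) in groups.items()}
    target = sum(phi.values()) / len(phi)
    return {g: w * math.sqrt(target / phi[g]) for g, (w, res) in groups.items()}

groups = {
    "peaks":     (1.0, [5.0, -3.0]),       # phi = 34, dominates
    "low_flows": (1.0, [0.1, 0.2, -0.1]),  # phi = 0.06, nearly invisible
}
new_w = balance_group_weights(groups)
for g, (w, res) in groups.items():
    # After adjustment both groups contribute the same phi (~17.03 here):
    print(g, round(sum((new_w[g] * r) ** 2 for r in res), 6))
```

    Without this rebalancing, the flood-peak residuals would dominate the objective function and the low-flow observations would be effectively invisible to the inversion, which is exactly the failure mode PWTADJ1 guards against.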

  18. Understanding and Mitigating Vortex-Dominated, Tip-Leakage and End-Wall Losses in a Transonic Splittered Rotor Stage

    DTIC Science & Technology

    2015-04-23

    blade geometry parameters the TPL design 9   tool was initiated by running the MATLAB script (*.m) Main_SpeedLine_Auto. Main_SpeedLine_Auto...SolidWorks for solid model generation of the blade shapes. Computational Analysis With solid models generated of the gas -path air wedge, automated...287 mm (11.3 in) Constrained by existing TCR geometry Number of Passages 12 None A blade tip-down design approach was used. The outputs of the

  19. Mixing, Combustion, and Other Interface Dominated Flows; Paragraphs 3.2.1 A, B, C and 3.2.2 A

    DTIC Science & Technology

    2014-04-09

    Condensed Matter Physics , (12 2010): 43401. doi: H. Lim, Y. Yu, J. Glimm, X. L. Li, D.H. Sharp. Subgrid Models for Mass and Thermal Diffusion in...zone and a series of radial cracks in solid plates hit by high velocity projectiles). • Only 2D dimensional models • Serial codes for running on single ...exter- nal parallel packages TAO and Global Arrays, developed within DOE high performance computing initiatives. A Schwartz-type overlapping domain

  20. The Navy's First Seasonal Ice Forecasts using the Navy's Arctic Cap Nowcast/Forecast System

    NASA Astrophysics Data System (ADS)

    Preller, Ruth

    2013-04-01

    As conditions in the Arctic continue to change, the Naval Research Laboratory (NRL) has developed an interest in longer-term seasonal ice extent forecasts. The Arctic Cap Nowcast/Forecast System (ACNFS), developed by the Oceanography Division of NRL, was run in forward model mode, without assimilation, to estimate the minimum sea ice extent for September 2012. The model was initialized with varying assimilative ACNFS analysis fields (June 1, July 1, August 1 and September 1, 2012) and run forward for nine simulations using the archived Navy Operational Global Atmospheric Prediction System (NOGAPS) atmospheric forcing fields from 2003-2011. The mean ice extent in September, averaged across all ensemble members, was the projected summer ice extent. These results were submitted to the Study of Environmental Arctic Change (SEARCH) Sea Ice Outlook project (http://www.arcus.org/search/seaiceoutlook). The ACNFS is a ~3.5 km coupled ice-ocean model that produces 5 day forecasts of the Arctic sea ice state in all ice covered areas in the northern hemisphere (poleward of 40° N). The ocean component is the HYbrid Coordinate Ocean Model (HYCOM) and is coupled to the Los Alamos National Laboratory Community Ice CodE (CICE) via the Earth System Modeling Framework (ESMF). The ocean and ice models are run in an assimilative cycle with the Navy's Coupled Ocean Data Assimilation (NCODA) system. Currently the ACNFS is being transitioned to operations at the Naval Oceanographic Office.

  1. Collinearly-improved BK evolution meets the HERA data

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-10-03

    In a previous publication, we established a collinearly-improved version of the Balitsky–Kovchegov (BK) equation, which resums to all orders the radiative corrections enhanced by large double transverse logarithms. Here, we study the relevance of this equation as a tool for phenomenology by confronting it with the HERA data. To that aim, we first improve the perturbative accuracy of our resummation by including two classes of single-logarithmic corrections: those generated by the first non-singular terms in the DGLAP splitting functions and those expressing the one-loop running of the QCD coupling. The equation thus obtained includes all the next-to-leading order corrections to the BK equation which are enhanced by (single or double) collinear logarithms. We then use numerical solutions of this equation to fit the HERA data for the electron–proton reduced cross-section at small Bjorken x. We obtain good-quality fits for physically acceptable initial conditions. Our best fit, which shows good stability up to virtualities as large as Q² = 400 GeV² for the exchanged photon, uses as an initial condition the running-coupling version of the McLerran–Venugopalan model, with the QCD coupling running according to the smallest-dipole prescription.

  2. Circumferential distortion modeling of the TF30-P-3 compression system

    NASA Technical Reports Server (NTRS)

    Mazzawy, R. S.; Banks, G. A.

    1977-01-01

    Circumferential inlet pressure and temperature distortion testing of the TF30 P-3 turbofan engine was conducted. The compressor system at the test conditions run was modelled according to a multiple segment parallel compressor model. Aspects of engine operation and distortion configuration modelled include the effects of compressor bleeds, relative pressure-temperature distortion alignment and circumferential distortion extent. Model predictions for limiting distortion amplitudes and flow distributions within the compression system were compared with test results in order to evaluate predicted trends. Relatively good agreement was obtained. The model also identified the low pressure compressor as the stall-initiating component, which was in agreement with the data.

  3. Model for assessment of the velocity and force at the start of sprint race.

    PubMed

    Janjić, Nataša J; Kapor, Darko V; Doder, Dragan V; Petrović, Aleksandar; Jarić, Slobodan

    2017-02-01

    A mathematical model was developed for the assessment of the starting velocity and the initial velocity and force of a 100-m sprint, based on a non-homogeneous differential equation with air resistance proportional to velocity, and on the initial conditions for [Formula: see text], [Formula: see text]. The use of this model requires the measurement of reaction time and segmental velocities over the course of the race. The model was validated by comparison with data obtained from 100-m sprints of men: Carl Lewis (1988), Maurice Greene (2001) and Usain Bolt (2009), and women: Florence Griffith-Joyner, Evelyn Ashford and Heike Drechsler (1988), showing a high level of agreement. Combined with the previous work of the authors, the present model allows for the assessment of important physical abilities, such as the exertion of a high starting force, the development of high starting velocity and, later on, the maximisation of peak running velocity. These data could help practitioners identify possible weaknesses and refine training methods for sprinters and other athletes whose performance depends on rapid movement initiation.
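    Although the paper's exact equations are not reproduced in the abstract, a model of this general family, a constant propulsive force with air resistance proportional to velocity, can be sketched as follows. All parameter values here are illustrative assumptions, not fitted to any athlete's data:

```python
# Sprinter model of the general form described above: m*dv/dt = F - c*v,
# whose closed-form solution with v(0) = 0 is v(t) = (F/c)*(1 - exp(-c*t/m)).
import math

def velocity(t, F=800.0, c=80.0, m=80.0):
    """Closed-form v(t) for m*dv/dt = F - c*v with v(0) = 0."""
    return (F / c) * (1.0 - math.exp(-c * t / m))

def velocity_euler(t, F=800.0, c=80.0, m=80.0, dt=1e-4):
    """Numerical cross-check of the same ODE by forward Euler."""
    v, elapsed = 0.0, 0.0
    while elapsed < t:
        v += dt * (F - c * v) / m
        elapsed += dt
    return v

print(round(velocity(6.0), 2))        # close to the terminal speed F/c = 10 m/s
print(round(velocity_euler(6.0), 2))  # the numerical integration agrees
```

    The closed-form solution shows why such models separate a starting phase (force-dominated, velocity rising steeply) from a later phase approaching the peak running velocity F/c, which is the decomposition the paper exploits.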

  4. Operational on-line coupled chemical weather forecasts for Europe with WRF/Chem

    NASA Astrophysics Data System (ADS)

    Hirtl, Marcus; Mantovani, Simone; Krüger, Bernd C.; Flandorfer, Claudia; Langer, Matthias

    2014-05-01

    Air quality is a key element for the well-being and quality of life of European citizens. Air pollution measurements and modeling tools are essential for the assessment of air quality according to EU legislation. The responsibilities of ZAMG, as the national weather service of Austria, include supporting the federal states and the public in questions connected to the protection of the environment, in the frame of advisory and counseling services as well as expert opinions. ZAMG conducts daily air-quality forecasts using the online coupled model WRF/Chem. Meteorology is simulated simultaneously with the emissions, turbulent mixing, transport, transformation, and fate of trace gases and aerosols. The emphasis of the application is on predicting pollutants over Austria. Two domains are used for the simulations: the mother domain covers Europe with a resolution of 12 km, and the inner domain covers the alpine region with a horizontal resolution of 4 km; 45 model levels are used in the vertical direction. The model runs twice per day for a period of 72 hours and is initialized with ECMWF forecasts. Online coupled models allow two-way interactions between different atmospheric processes to be considered, including chemistry (both gases and aerosols), clouds, radiation, the boundary layer, emissions, meteorology and climate. In the operational set-up, direct, indirect and semi-direct effects between meteorology and air chemistry are enabled. The model runs on the HPCF (High Performance Computing Facility) of the ZAMG; in the current set-up 1248 CPUs are used. As the simulations require a large amount of computing resources, a method to save I/O time was implemented: every MPI task writes all its output into the shared-memory filesystem of the compute nodes. Once the WRF/Chem integration is finished, all split NetCDF files are merged and saved on the global file system. The merge routine is based on parallel-NetCDF. 
With this method the model runs about 30% faster on the SGI ICE X. Different additional external data sources can be used to improve the forecasts. Satellite measurements of the Aerosol Optical Thickness (AOT) and ground-based PM10 measurements are combined into highly resolved initial fields using regression and assimilation techniques. The available local emission inventories provided by the different Austrian regional governments were harmonized and are used for the model simulations. A model evaluation for a selected episode in February 2010 is presented with respect to PM10 forecasts. During that month, exceedances of PM10 thresholds occurred at many measurement stations of the Austrian network. Different model runs (model only / only ground stations assimilated / satellite and ground stations assimilated) are compared with the respective measurements.

  5. Correlates of adherence to a telephone-based multiple health behavior change cancer preventive intervention for teens: the Healthy for Life Program (HELP).

    PubMed

    Mays, Darren; Peshkin, Beth N; Sharff, McKane E; Walker, Leslie R; Abraham, Anisha A; Hawkins, Kirsten B; Tercyak, Kenneth P

    2012-02-01

    This study examined factors associated with teens' adherence to a multiple health behavior cancer preventive intervention. Analyses identified predictors of trial enrollment, run-in completion, and adherence (intervention initiation, number of sessions completed). Of 104 teens screened, 73% (n = 76) were trial eligible. White teens were more likely to enroll than non-Whites (χ²(1) = 4.49, p = .04). Among enrolled teens, 76% (n = 50) completed the run-in; there were no differences between run-in completers and noncompleters. A majority of run-in completers (70%, n = 35) initiated the intervention, though teens who initiated the intervention were significantly younger than those who did not (p < .05). The mean number of sessions completed was 5.7 (SD = 2.6; maximum = 8). After adjusting for age, teens with poorer session engagement (e.g., less cooperative) completed fewer sessions (B = -1.97, p = .003, R² = .24). Implications for adolescent cancer prevention research are discussed.

  6. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  7. Shock Initiation Experiments with Ignition and Growth Modeling on the HMX-Based Explosive LX-14

    NASA Astrophysics Data System (ADS)

    Vandersall, Kevin S.; Dehaven, Martin R.; Strickland, Shawn L.; Tarver, Craig M.; Springer, H. Keo; Cowan, Matt R.

    2017-06-01

    Shock initiation experiments on the HMX-based explosive LX-14 were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and provide a basis for Ignition and Growth reactive flow modeling. A 101 mm diameter gas gun was utilized to initiate the explosive charges, with manganin piezoresistive pressure gauge packages placed between sample disks pressed to different densities (1.57 or 1.83 g/cm³, corresponding to 85% or 99% of theoretical maximum density (TMD), respectively). The shock sensitivity was found to increase with decreasing density, as expected. Ignition and Growth model parameters were derived that yielded reasonable agreement with the experimental data at both initial densities. The shock sensitivity at the tested densities will be compared with prior work published on other HMX-based formulations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded in part by the Joint DoD-DOE Munitions Program.

  8. Improving Numerical Weather Predictions of Summertime Precipitation Over the Southeastern U.S. Through a High-Resolution Initialization of the Surface State

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Krikishen, Jayanthi; Jedlovec, Gary J.

    2011-01-01

    It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high resolution models. This paper presents model verification results of a case study period from June-August 2008 over the Southeastern U.S. using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the NASA Land Information System (LIS) and sea surface temperature (SST) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spin-up run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer, but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS/MODIS data substantially impact surface and boundary layer properties.

  9. Synthesis of Polysyllabic Sequences of Thai Tones Using a Generative Model of Fundamental Frequency Contours

    NASA Astrophysics Data System (ADS)

    Seresangtakul, Pusadee; Takara, Tomio

    In this paper, the distinctive tones of Thai in running speech are studied. We present rules to synthesize F0 contours of Thai tones in running speech using a generative model of F0 contours. With this method, the pitch contours of Thai polysyllabic words, both disyllabic and trisyllabic, were analyzed, and coarticulation effects of Thai tones in running speech were found. Based on the analysis of the polysyllabic words using this model, rules were derived and applied to synthesize Thai polysyllabic tone sequences. We performed listening tests to evaluate the intelligibility of the rules for Thai tone generation. The average intelligibility scores were 98.8% and 96.6% for disyllabic and trisyllabic words, respectively. From these results, the tone-generation rules were shown to be effective. Furthermore, we constructed connecting rules to synthesize suprasegmental F0 contours using the trisyllable training rules' parameters: the parameters of the first, third, and second syllables were assigned to the initial, ending, and remaining syllables of a sentence, respectively. Even with such a simple rule, the synthesized phrases/sentences were completely identified in listening tests. The mean opinion score (MOS) was 3.50, while the scores for the original and analysis/synthesis samples were 4.82 and 3.59, respectively.

  10. Pinatubo Emulation in Multiple Models (POEMs): co-ordinated experiments in the ISA-MIP model intercomparison activity component of the SPARC Stratospheric Sulphur and its Role in Climate initiative (SSiRC)

    NASA Astrophysics Data System (ADS)

    Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina

    2016-04-01

    The World Climate Research Program's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC is an intercomparison, ISA-MIP, of composition-climate models that simulate the stratospheric aerosol layer interactively. Within POEMs, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP): Gaussian emulation is used to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once the emulator is trained on the ensemble, a Monte Carlo simulation with the fast Gaussian emulator enables a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to be compared against uncertainties in natural and anthropogenic emissions and in structural parameters of the models. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we outline the POEMs analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
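    The variance-based sensitivity analysis that the trained emulator enables can be sketched in a few lines of Python. The quadratic response surface below is a hypothetical stand-in for a trained Gaussian-process emulator (it is not any SSiRC model); the first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y) is estimated by Monte Carlo sampling and binning on each input, which is cheap precisely because the emulator is fast:

```python
import random
import statistics

def emulator(x1, x2):
    # Stand-in for a trained Gaussian-process emulator of, e.g., peak
    # volcanic forcing as a function of two uncertain inputs in [0, 1].
    # Purely hypothetical response surface for illustration.
    return 3.0 * x1 + 0.5 * x2 ** 2 + 0.2 * x1 * x2

def first_order_sobol(f, which, n=20000, bins=40):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning on input `which`."""
    random.seed(1)
    xs = [(random.random(), random.random()) for _ in range(n)]
    ys = [f(a, b) for a, b in xs]
    total_var = statistics.pvariance(ys)
    groups = [[] for _ in range(bins)]
    for (a, b), y in zip(xs, ys):
        xi = (a, b)[which]
        groups[min(int(xi * bins), bins - 1)].append(y)
    cond_means = [statistics.fmean(g) for g in groups if g]
    return statistics.pvariance(cond_means) / total_var

s1 = first_order_sobol(emulator, 0)  # dominant linear input
s2 = first_order_sobol(emulator, 1)  # weaker quadratic input
```

    With this surface most of the output variance is attributable to the first input, mirroring how the PPE analysis ranks eruption-source and model-process parameters.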

  11. Modelling and validation of land-atmospheric heat fluxes using classical surface parameters over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Ma, W.; Ma, Y.; Hu, Z.; Zhong, L.

    2017-12-01

    In this study, a land-atmosphere model was initialized by ingesting AMSR-E soil moisture products, and the results were compared with the default model configuration and with in situ long-term CAMP/Tibet observations. The field observation sites, operated by ITPCAS (Institute of Tibetan Plateau Research, Chinese Academy of Sciences), are introduced first. The differences between the AMSR-E-initialized model runs, the default model configuration and the in situ data showed an apparent inconsistency in the model-simulated land surface heat fluxes, and the soil moisture was found to be sensitive to the specific model configuration. To evaluate and verify the model's stability, a long-term modeling study with AMSR-E soil moisture data ingestion was performed: based on test simulations, AMSR-E data were assimilated into an atmospheric model for July and August 2007. The resulting land surface fluxes agreed well with both the in situ data and the results of the default model configuration. Therefore, the simulation can be used to retrieve land surface heat fluxes from an atmospheric model over the Tibetan Plateau.

  12. Stabilization of the wheel running phenotype in mice.

    PubMed

    Bowen, Robert S; Cates, Brittany E; Combs, Eric B; Dillard, Bryce M; Epting, Jessica T; Foster, Brittany R; Patterson, Shawnee V; Spivey, Thomas P

    2016-03-01

    Increased physical activity is well known to improve health and wellness by modifying the risks for many chronic diseases. Rodent wheel running behavior is a beneficial surrogate model to evaluate the biology of daily physical activity in humans. Upon initial exposure to a running wheel, individual mice differentially respond to the experience, which confounds the normal activity patterns exhibited in this otherwise repeatable phenotype. To promote phenotypic stability, a minimum seven-day (or greater) acclimation period is utilized. Although phenotypic stabilization is achieved during this 7-day period, data to support acclimation periods of this length are not currently available in the literature. The purpose of this project is to evaluate the wheel running response in C57BL/6J mice immediately following exposure to a running wheel. Twenty-eight male and thirty female C57BL/6J mice (Jackson Laboratory, Bar Harbor, ME) were acquired at eight weeks of age and were housed individually with free access to running wheels. Wheel running distance (km), duration (min), and speed (m·min⁻¹) were measured daily for fourteen days following initial housing. One-way ANOVAs were used to evaluate day-to-day differences in each wheel running character. Limits of agreement and mean difference statistics were calculated between days 1-13 (acclimating) and day 14 (acclimated) to assess day-to-day agreement between each parameter. Wheel running distance (males: F=5.653, p=2.14 × 10⁻⁹; females: F=8.217, p=1.20 × 10⁻¹⁴), duration (males: F=2.613, p=0.001; females: F=4.529, p=3.28 × 10⁻⁷), and speed (males: F=7.803, p=1.22 × 10⁻¹³; females: F=13.140, p=2.00 × 10⁻¹⁶) exhibited day-to-day differences. Tukey's HSD post-hoc testing indicated differences between early (males: days 1-3; females: days 1-6) and later (males: days >3; females: days >6) wheel running periods in distance and speed. 
Duration only exhibited an anomalous difference between wheel running on day 13 and wheel running on days 1 through 4 in males. In females, duration exhibited anomalous differences due to abnormally depressed wheel running on day 6 and abnormally elevated wheel running on day 14. Limits of agreement and mean difference statistics indicated stable phenotypic variability with an up-trending daily mean for distance and speed that stabilized within the first three days in males and within eight days in females. Duration exhibited stable variability after nine days in males and after seven days in females. Although it is common practice to allow a prolonged (≥ seven day) acclimation period prior to recording wheel running data, the current study suggests that phenotypic stabilization of all three indices is achieved at different times with distance and speed exhibiting stability by day three in males and day eight in females. Duration exhibits stability by day nine in males and day seven in females. Copyright © 2015 Elsevier Inc. All rights reserved.
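    The limits-of-agreement statistic reported in this record is conventionally the Bland-Altman interval: the mean of the paired day-to-day differences ± 1.96 times their standard deviation. A minimal Python sketch, using hypothetical daily distances rather than data from the study:

```python
import statistics

def limits_of_agreement(day_a, day_b):
    """Bland-Altman mean difference and 95% limits of agreement
    between two days of paired measurements on the same animals."""
    diffs = [a - b for a, b in zip(day_a, day_b)]
    md = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Hypothetical daily distances (km) for five mice on day 13 vs day 14.
day13 = [6.1, 7.4, 5.8, 8.0, 6.9]
day14 = [6.3, 7.2, 6.0, 8.3, 7.1]
md, (lo, hi) = limits_of_agreement(day14, day13)
```

    A mean difference near zero with narrow limits indicates a stabilized phenotype; a drifting mean, as seen early in acclimation, indicates the behavior is still changing day to day.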

  13. Optimization of green infrastructure network at semi-urbanized watersheds to manage stormwater volume, peak flow and life cycle cost: Case study of Dead Run watershed in Maryland

    NASA Astrophysics Data System (ADS)

    Heidari Haratmeh, B.; Rai, A.; Minsker, B. S.

    2016-12-01

    Green Infrastructure (GI) has become widely known as a sustainable solution for stormwater management in urban environments. Despite this growing recognition, researchers and practitioners lack clear and explicit guidelines on how GI practices should be implemented in urban settings. This study is developing a noisy multi-objective, multi-scale genetic algorithm that determines optimal GI networks for environmental, economic and social objectives. The methodology accounts for uncertainty in modeling results and is designed to operate at the sub-watershed as well as the patch scale, using two different simulation models, SWMM and RHESSys, in a Cloud-based implementation with a Web interface. As an initial case study, a semi-urbanized watershed, Dead Run 5 in Baltimore County, Maryland, is selected. The objectives of the study are to minimize life cycle cost, maximize human preference related to well-being, and minimize the difference between pre-development hydrographs, generated from current rainfall events and design storms, and those that result from proposed GI scenarios. Initial results for the Dead Run 5 watershed suggest that placing GI in the proximity of the watershed outlet optimizes life cycle cost, stormwater volume, and peak flow capture. The framework can easily present outcomes of GI design scenarios to both designers and local stakeholders, and future plans include receiving feedback from users on candidate designs and interactively updating optimal GI network designs in a crowd-sourced design process. This approach can also be helpful in deriving design guidelines that better meet stakeholder needs.
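    At the core of any multi-objective genetic algorithm of the kind described is non-dominated (Pareto) selection over the competing objectives. A minimal Python sketch with hypothetical (life-cycle cost, peak outflow) scores standing in for SWMM/RHESSys output, not the study's actual algorithm:

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse on every (minimized) objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

random.seed(0)
# Hypothetical GI designs scored on (life-cycle cost, peak outflow), both minimized.
designs = [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(50)]
front = pareto_front(designs)
```

    In the full algorithm this selection step is repeated each generation, with crossover and mutation of GI placements between evaluations; the "noisy" variant re-evaluates candidates to average out simulation uncertainty.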

  14. Seasonal-to-decadal predictability in the Nordic Seas and Arctic with the Norwegian Climate Prediction Model

    NASA Astrophysics Data System (ADS)

    Counillon, Francois; Kimmritz, Madlen; Keenlyside, Noel; Wang, Yiguo; Bethke, Ingo

    2017-04-01

    The Norwegian Climate Prediction Model combines the Norwegian Earth System Model with the Ensemble Kalman Filter data assimilation method. The prediction skills of different versions of the system (with 30 members) are tested in the Nordic Seas and the Arctic region. By comparing hindcasts branched from an SST-only assimilation run with a free ensemble run of 30 members, we are able to dissociate the predictability rooted in the external forcing from the predictability harvested from SST-derived initial conditions. The latter adds predictability in the North Atlantic subpolar gyre and the Nordic Seas regions, and overall there is very little degradation or forecast drift. Combined assimilation of SST and T-S profiles further improves the prediction skill in the Nordic Seas and into the Arctic, leading to multi-year predictability in the high latitudes. Ongoing developments in strongly coupled assimilation (ocean and sea ice) of ice concentration in an idealized twin experiment will be shown as a way to further enhance prediction skill in the Arctic.

  15. Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning

    NASA Astrophysics Data System (ADS)

    Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.

    2017-12-01

    Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions, etc.). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input-response relationships can be quantified by generating an ensemble of many (100s to 1000s) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high-resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce a large number of realizations of weather states from a small number of deterministic simulations, at speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations: these equations are location-invariant and consist of first and second derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net, providing the network the information necessary to compute first and second derivatives. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that this error increases as a function of the prediction lead time.
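    The training-data construction described above can be sketched as follows: each interior cell's predictor is its 3x3 neighborhood at time t (enough information to form first and second finite differences), and the target is the same cell's value at t+1. In this illustrative Python sketch a toy explicit diffusion step stands in for the WRF PDE solver; the grid and coefficients are made up:

```python
def diffusion_step(grid, k=0.1):
    """Toy 2-D explicit diffusion update standing in for the PDE solver."""
    n, m = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (grid[i-1][j] + grid[i+1][j] + grid[i][j-1]
                   + grid[i][j+1] - 4 * grid[i][j])
            out[i][j] = grid[i][j] + k * lap
    return out

def training_pairs(grid):
    """Predictor: the 3x3 neighborhood at time t; target: the center
    cell's value at t+1 from the reference solver."""
    nxt = diffusion_step(grid)
    pairs = []
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            patch = [grid[i+di][j+dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            pairs.append((patch, nxt[i][j]))
    return pairs

grid = [[float(i * j) for j in range(6)] for i in range(6)]
pairs = training_pairs(grid)
```

    A neural net trained on such (patch, next-value) pairs is applied identically at every location, which is how the location-invariance of the PDE is built into the architecture.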

  16. Extension of the PC version of VEPFIT with input and output routines running under Windows

    NASA Astrophysics Data System (ADS)

    Schut, H.; van Veen, A.

    1995-01-01

    The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.

  17. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator- Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in realtime closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.

  18. Expansion of the Real-time SPoRT-Land Information System for NOAA/National Weather Service Situational Awareness and Local Modeling Applications

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; White, Kristopher D.

    2014-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL (Jedlovec 2013; Ralph et al. 2013; Merceret et al. 2013) is running a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework (hereafter referred to as the "SPoRT-LIS"). Output from the real-time SPoRT-LIS is used for (1) initializing land surface variables for local modeling applications, and (2) display in decision support systems for situational awareness and drought monitoring at select NOAA/National Weather Service (NWS) partner offices. The SPoRT-LIS is currently run over a domain covering the southeastern half of the Continental United States (CONUS), with an additional experimental real-time run over the entire CONUS and surrounding portions of southern Canada and northern Mexico. The experimental CONUS run incorporates hourly quantitative precipitation estimation (QPE) from the National Severe Storms Laboratory Multi-Radar Multi-Sensor (MRMS) product (Zhang et al. 2011, 2014), which will be transitioned into operations at the National Centers for Environmental Prediction (NCEP) in Fall 2014. This paper describes the current and experimental SPoRT-LIS configurations and documents some of the limitations that remain even with the advent of MRMS precipitation analyses in the SPoRT-LIS simulations. Section 2 gives background information on the NASA LIS and describes the real-time SPoRT-LIS configurations being compared. Section 3 presents recent work to develop a training module on situational awareness applications of real-time SPoRT-LIS output. Comparisons between output from the two SPoRT-LIS runs are shown in Section 4, including documentation of issues encountered in using the MRMS precipitation dataset. A summary and future work are given in Section 5, followed by acknowledgements and references.

  19. Particle Size Effects on CL-20 Initiation and Detonation

    NASA Astrophysics Data System (ADS)

    Valancius, Cole; Bainbridge, Joe; Love, Cody; Richardson, Duane

    2017-06-01

    Particle size, or specific surface area, effects on explosives have been of interest to the explosives community for both application and modeling of initiation and detonation. Different particle sizes of CL-20 were used in detonator experiments to determine the effects of particle size on initiation, run-up to steady-state detonation, and steady-state detonation. Historical tests have demonstrated a direct relationship between particle size and initiation; however, those tests inadvertently employed density gradients, making it difficult to discern the effects of particle size from the effects of density. Density gradients were removed from these tests by using a larger-diameter, shorter charge column, allowing for similar loading across different particle sizes. Without the density gradient, the effects of particle size on initiation and detonation are easier to determine. The results contrast with historical results, showing that particle size does not directly affect initiation threshold.

  20. Soil Moisture and Snow Cover: Active or Passive Elements of Climate?

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Robertson, Franklin R.; Roads, John O.; Arnold, James E. (Technical Monitor)

    2001-01-01

    A key question in the study of the hydrologic cycle is the extent to which surface effects such as soil moisture and snow cover are simply passive elements or whether they can affect the evolution of climate on seasonal and longer time scales. We have constructed ensembles of predictability studies using the NCAR CCM3 in which we compared the relative roles of initial surface and atmospheric conditions over the central and western U.S. GAPP region in determining the subsequent evolution of soil moisture and of snow cover. We have also made sensitivity studies with exaggerated soil moisture and snow cover anomalies in order to determine the physical processes that may be important. Results from simulations with realistic soil moisture anomalies indicate that internal climate variability may be the strongest factor, with some indication that the initial atmospheric state is also important. The initial state of soil moisture does not appear important, a result that held whether simulations were started in late winter or late spring. Model runs with exaggerated soil moisture reductions (near-desert conditions) showed a much larger effect, with warmer surface temperatures, reduced precipitation, and lower surface pressures; the latter indicating a response of the atmospheric circulation. These results suggest the possibility of a threshold effect in soil moisture, whereby an anomaly must be of a sufficient size before it can have a significant impact on the atmospheric circulation and hence climate. Results from simulations with realistic snow cover anomalies indicate that the time of year can be crucial. When introduced in late winter, these anomalies strongly affected the subsequent evolution of snow cover. When introduced in early winter, however, little or no effect is seen on the subsequent snow cover. 
    Runs with greatly exaggerated initial snow cover indicate that the high reflectivity of snow is the most important process by which snow cover can impact climate, through lower surface temperatures and increased surface pressures. In early winter, the amount of solar radiation is very small and so this albedo effect is inconsequential, while in late winter, with the sun higher in the sky and the period of daylight longer, the effect is much stronger. The results to date were obtained for model runs with present-day conditions. We are currently analyzing runs made with projected forcings for the 21st century to see if these results are modified in any way under likely scenarios of future climate change.

  1. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
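    Two of the model's central components, the process path and the run-out length, can be illustrated with a steepest-descent path combined with a geometric (travel-angle) stopping criterion, one of the approach families such tools provide. This is a hypothetical Python reimplementation for illustration, not the SAGA code:

```python
import math

def gpp_path(dtm, start, cell=10.0, alpha_deg=30.0):
    """Steepest-descent process path on a gridded DTM with a geometric
    run-out criterion: stop when total drop / travel distance falls
    below tan(alpha), or when a local sink is reached."""
    i, j = start
    z0 = dtm[i][j]
    path = [start]
    dist = 0.0
    tan_a = math.tan(math.radians(alpha_deg))
    while True:
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < len(dtm) and 0 <= j + dj < len(dtm[0])]
        ni, nj = min(nbrs, key=lambda p: dtm[p[0]][p[1]])
        if dtm[ni][nj] >= dtm[i][j]:
            break  # local sink: deposition (sink filling not modeled here)
        dist += cell * math.hypot(ni - i, nj - j)
        i, j = ni, nj
        path.append((i, j))
        if (z0 - dtm[i][j]) / dist < tan_a:
            break  # run-out length reached
    return path

# Hypothetical 5x5 DTM sloping uniformly along the first index.
dtm = [[100.0 - 10.0 * i for _ in range(5)] for i in range(5)]
path = gpp_path(dtm, (0, 0))
```

    With this steep test slope the mass point descends the full grid; on a gentler slope the travel-angle test would halt it earlier, giving the run-out area.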

  2. Windfield and trajectory models for tornado-propelled objects. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redmann, G.H.; Radbill, J.R.; Marte, J.E.

    1983-03-01

    This is the final report of a three-phased research project to develop a six-degree-of-freedom mathematical model to predict the trajectories of tornado-propelled objects. The model is based on the meteorological, aerodynamic, and dynamic processes that govern the trajectories of missiles in a tornadic windfield. The aerodynamic coefficients for the postulated missiles were obtained from full-scale wind tunnel tests on a 12-inch pipe and car and from drop tests. Rocket sled tests were run whereby the 12-inch pipe and car were injected into a worst-case tornado windfield in order to verify the trajectory model. To simplify and facilitate the use of the trajectory model for design applications without having to run the computer program, this report gives the trajectory data for NRC-postulated missiles in tables based on given variables of initial conditions of injection and tornado windfield. Complete descriptions of the tornado windfield and trajectory models are presented. The trajectory model computer program is also included for those desiring to perform trajectory or sensitivity analyses beyond those included in the report or for those wishing to examine other missiles and use other variables.
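    The force balance underlying such trajectory calculations can be illustrated with a point-mass (3-DOF) integration of gravity plus aerodynamic drag relative to a prescribed windfield. The report's actual model is six-degree-of-freedom and includes rotational dynamics, so this Python sketch, with hypothetical mass, drag and wind values, shows only the basic translational physics:

```python
import math

def trajectory(pos, vel, wind, mass=100.0, cd_a=0.5, rho=1.2,
               dt=0.01, t_max=30.0, g=9.81):
    """Point-mass trajectory under gravity and drag relative to a
    windfield wind(x, y, z) -> (wx, wy, wz), by forward-Euler
    integration. Returns the position when the object reaches z <= 0.
    cd_a is the drag-coefficient-times-area product (hypothetical)."""
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while z > 0.0 and t < t_max:
        wx, wy, wz = wind(x, y, z)
        rx, ry, rz = vx - wx, vy - wy, vz - wz   # velocity relative to air
        speed = math.sqrt(rx * rx + ry * ry + rz * rz)
        k = 0.5 * rho * cd_a * speed / mass       # drag acceleration factor
        ax, ay, az = -k * rx, -k * ry, -k * rz - g
        vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        t += dt
    return x, y, z

# Drop from 50 m into a steady 20 m/s cross-wind along +x (made-up values).
impact = trajectory((0.0, 0.0, 50.0), (0.0, 0.0, 0.0),
                    lambda x, y, z: (20.0, 0.0, 0.0))
```

    The tabulated results in the report are, in effect, many such integrations over ranges of injection conditions and windfield parameters, with full rotational aerodynamics included.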

  3. BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.

  4. Case studies using GOES infrared data and a planetary boundary layer model to infer regional scale variations in soil moisture. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Rose, F. G.

    1983-01-01

    Modeled temperature data from a one-dimensional, time-dependent, initial-value planetary boundary layer model, run 16 times with varying initial values of moisture availability, are applied through a regression equation to longwave infrared GOES satellite data to infer moisture availability over a regional area in the central U.S. This was done for several days during the summers of 1978 and 1980, when a large gradient in the antecedent precipitation index (API) represented the boundary between a drought area and a region of near-normal precipitation. Correlations between satellite-derived moisture availability and API were found to exist. Errors from the presence of clouds, water vapor and other spatial inhomogeneities made the use of the measurement for anything except the relative degree of moisture availability dubious.
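    The inversion step amounts to regressing the moisture availability prescribed in the 16 model runs against the resulting modeled surface temperature, then applying the fitted relation to satellite-observed temperatures. A minimal Python sketch with hypothetical numbers and an idealized linear relation (the real regression used the boundary-layer model's IR temperatures):

```python
def least_squares(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Moisture availability prescribed in 16 boundary-layer model runs
# versus the resulting midday surface temperature (hypothetical values:
# drier surfaces run warmer).
m_avail = [i / 15 for i in range(16)]            # 0.0 .. 1.0
t_model = [320.0 - 25.0 * m for m in m_avail]    # K

a, b = least_squares(t_model, m_avail)

# Apply the fit to a hypothetical GOES-observed surface temperature.
goes_temp = 310.0
inferred_m = a + b * goes_temp
```

    The negative slope b encodes the physical expectation that warmer midday surfaces correspond to lower moisture availability.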

  5. Wheel-running mitigates psychomotor sensitization initiation but not post-sensitization conditioned activity and conditioned place preference induced by cocaine in mice.

    PubMed

    Geuzaine, Annabelle; Tirelli, Ezio

    2014-04-01

    Previous literature suggests that physical exercise, allowed by unlimited access to a running wheel for several weeks, can mitigate chronic neurobehavioral responsiveness to several addictive drugs in rodents. Here, the potential preventive effects of unlimited wheel-running on the initiation of psychomotor sensitization and the acquisition and extinction of conditioned place preference (CPP) induced by 10 mg/kg cocaine in C57BL/6J mice were assessed in two independent experiments. To this end, half of the mice were singly housed with a running wheel from 28 days of age for 10 weeks prior to psychopharmacological tests, during which housing conditions did not change; the other half were housed without a running wheel. In Experiment 1, prior to initiating sensitization, psychomotor activity on the first two drug-free once-daily sessions was not affected by wheel-running. This was also found for the acute psychomotor-activating effect of cocaine on the first sensitization session. Psychomotor sensitization readily developed over the 9 following once-daily sessions in mice housed without a wheel, whereas it was inhibited in mice housed with a wheel. However, that difference did not transfer to post-sensitization conditioned activity. In contrast with the sensitization results, mice housed with a wheel still expressed a clear-cut CPP, which did not extinguish differently from that of the other group, a result at odds with previous studies reporting either an attenuating or an increasing effect of wheel-running on cocaine-induced conditioned reward. Together, the available results indicate that interactions between wheel-running and cocaine effects are far from satisfactorily characterized. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Prediction and Predictability of the Madden Julian Oscillation in the NASA GEOS-5 Seasonal-to-Subseasonal System

    NASA Technical Reports Server (NTRS)

    Achuthavarier, Deepthi; Koster, Randal; Marshak, Jelena; Schubert, Siegfried; Molod, Andrea

    2018-01-01

    In this study, we examine the prediction skill and predictability of the Madden Julian Oscillation (MJO) in a recent version of the NASA GEOS-5 atmosphere-ocean coupled model run at 1/2-degree horizontal resolution. The results are based on a suite of hindcasts produced as part of the NOAA SubX project, consisting of seven ensemble members initialized every 5 days for the period 1999-2015. The atmospheric initial conditions were taken from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the ocean and the sea ice were taken from a GMAO ocean analysis. The land states were initialized from the MERRA-2 land output, which is based on observation-corrected precipitation fields. We investigated the MJO prediction skill in terms of the bivariate correlation coefficient for the real-time multivariate MJO (RMM) indices. The correlation coefficient stays at or above 0.5 out to forecast lead times of 26-36 days, with a pronounced increase in skill for forecasts initialized from phase 3, when the MJO convective anomaly is located in the central tropical Indian Ocean. A corresponding estimate of the upper limit of the predictability is calculated by considering a single ensemble member as the truth and verifying the ensemble mean of the remaining members against it. The predictability estimates fall between 35-37 days (taken as the forecast lead at which the correlation reaches 0.5) and are rather insensitive to the initial MJO phase. The model shows slightly higher skill when the initial conditions contain strong MJO events compared to weak events, although the difference in skill is evident only from leads 1 to 20. Similar to other models, the RMM-index-based skill arises mostly from the circulation components of the index. The skill of the convective component of the index drops to 0.5 by day 20, as opposed to day 30 for the circulation fields. 
The propagation of the MJO anomalies over the Maritime Continent does not appear problematic in the GEOS-5 hindcasts, implying that the Maritime Continent predictability barrier may not be a major concern in this model. Finally, the MJO prediction skill in this version of GEOS-5 is superior to that of the current seasonal prediction system at the GMAO; this could be attributed partly to a slightly better representation of the MJO in the free-running version of this model and partly to the improved atmospheric initialization from MERRA-2.
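
    The bivariate correlation used above has a compact definition; a minimal sketch, assuming the standard form used in RMM-based MJO verification (observed and forecast RMM1/RMM2 pairs accumulated over all verification times at a fixed lead), with synthetic index values for illustration:

    ```python
    import numpy as np

    def bivariate_correlation(obs1, obs2, fc1, fc2):
        """Bivariate correlation between observed and forecast RMM index
        pairs at a fixed forecast lead; forecasts are commonly considered
        useful while this value stays at or above 0.5."""
        num = np.sum(obs1 * fc1 + obs2 * fc2)
        den = np.sqrt(np.sum(obs1**2 + obs2**2)) * np.sqrt(np.sum(fc1**2 + fc2**2))
        return num / den

    # A perfect forecast scores 1; a quadrature (90-degree phase error)
    # forecast of the same MJO cycle scores 0.
    t = np.linspace(0.0, 4.0 * np.pi, 200)
    rmm1, rmm2 = np.cos(t), np.sin(t)
    print(bivariate_correlation(rmm1, rmm2, rmm1, rmm2))
    print(bivariate_correlation(rmm1, rmm2, -rmm2, rmm1))
    ```

    Because the metric rewards both amplitude and phase agreement of the two-component RMM vector, a forecast that propagates the MJO at the wrong speed loses skill even if its amplitude is correct.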

  7. Relative Influence of Initial Surface and Atmospheric Conditions on Seasonal Water and Energy Balances

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Roads, John O.; Robertson, Franklin R.; Goodman, H. Michael (Technical Monitor)

    2001-01-01

We constructed and analyzed wet and dry soil moisture composites for the mid-latitude GCIP region of the central US using long climate model simulations made with the NCAR CCM3 and reanalysis products from NCEP. Using the diagnostic composites as a guide, we completed a series of predictability experiments in which we imposed soil water initial conditions in CCM3 for the GCIP region for June 1 from anomalously wet and dry years, with atmospheric initial conditions taken from June 1 of a year with 'near-normal' soil water, and, conversely, initial soil water from the near-normal year with atmospheric initial conditions from the wet and dry years. Preliminary results indicate that the initial state of the atmosphere is more important than the initial state of soil water in determining the subsequent late spring and summer evolution of soil water over the GCIP region. Surprisingly, neither the composites nor the predictability experiments yielded a strong influence of soil moisture on the atmosphere. To explore this further, we made runs with extreme dry soil moisture initial anomalies imposed over the GCIP region (the soil close to being completely dry). These runs did yield a very strong effect on the atmosphere that persisted for at least three months. We conclude that the magnitude of the initial soil moisture anomaly is crucial, at least in CCM3, and are currently investigating whether a threshold exists below which little impact is seen. In a complementary study, we compared the impact of the initial condition of snow cover versus the initial atmospheric state over the western US (corresponding to the westward extension of the GAPP program, the follow-on to GCIP). In this case, the initial prescription of snow cover is far more important than the initial atmospheric state in determining the subsequent evolution of snow cover. We are currently working to understand the very different soil water and snow cover results.

  8. Predicting ground contact events for a continuum of gait types: An application of targeted machine learning using principal component analysis.

    PubMed

    Osis, Sean T; Hettinga, Blayne A; Ferber, Reed

    2016-05-01

An ongoing challenge in the application of gait analysis to clinical settings is the standardized detection of temporal events, with unobtrusive and cost-effective equipment, for a wide range of gait types. The purpose of the current study was to investigate a targeted machine learning approach for the prediction of timing for foot strike (or initial contact) and toe-off, using only kinematics for walking, forefoot running, and heel-toe running. Data were categorized by gait type and split into a training set (∼30%) and a validation set (∼70%). A principal component analysis was performed, and separate linear models were trained and validated for foot strike and toe-off, using ground reaction force data as a gold-standard for event timing. Results indicate the model predicted both foot strike and toe-off timing to within 20 ms of the gold-standard for more than 95% of cases in walking and running gaits. The machine learning approach continues to provide robust timing predictions for clinical use, and may offer a flexible methodology to handle new events and gait types. Copyright © 2016 Elsevier B.V. All rights reserved.
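
    The pipeline described (PCA to reduce kinematic curves, then separate linear models regressed against gold-standard event times) can be sketched on synthetic data. Everything below, including the curve shapes, split sizes, and component count, is an illustrative stand-in, not the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 120 strides, each a 50-sample kinematic curve
    # whose shape shifts with the hidden event time we want to predict.
    n_strides, n_samples = 120, 50
    event_ms = rng.uniform(80.0, 120.0, n_strides)   # "gold standard" timings
    base = np.sin(np.linspace(0.0, np.pi, n_samples))
    X = np.array([np.roll(base, int(e / 10)) for e in event_ms])
    X += 0.01 * rng.standard_normal(X.shape)

    # Training/validation split (the study used roughly 30% / 70%).
    idx = rng.permutation(n_strides)
    tr, va = idx[:40], idx[40:]

    # PCA via SVD on the training curves, then a linear model on the scores.
    mu = X[tr].mean(axis=0)
    _, _, Vt = np.linalg.svd(X[tr] - mu, full_matrices=False)
    k = 5                                            # retained components
    scores_tr = (X[tr] - mu) @ Vt[:k].T
    design = np.column_stack([scores_tr, np.ones(len(tr))])
    coef, *_ = np.linalg.lstsq(design, event_ms[tr], rcond=None)

    # Predict event timing for the held-out strides.
    scores_va = (X[va] - mu) @ Vt[:k].T
    pred = np.column_stack([scores_va, np.ones(len(va))]) @ coef
    err = np.abs(pred - event_ms[va])
    print(f"{(err < 20).mean():.0%} of validation strides within 20 ms")
    ```

    The appeal of this design is that the PCA basis and regression weights are fit once offline; at runtime, event timing for a new stride is a single projection and dot product, with no force plate required.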

  9. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

A methodology for automatic mathematical modeling and generating simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models, and to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a friendly, well-organized environment in which engineers build a knowledge base of base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the simulation modeling process can be simplified.

  10. Long-run evolution of the global economy - Part 2: Hindcasts of innovation and growth

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2015-10-01

    Long-range climate forecasts use integrated assessment models to link the global economy to greenhouse gas emissions. This paper evaluates an alternative economic framework outlined in part 1 of this study (Garrett, 2014) that approaches the global economy using purely physical principles rather than explicitly resolved societal dynamics. If this model is initialized with economic data from the 1950s, it yields hindcasts for how fast global economic production and energy consumption grew between 2000 and 2010 with skill scores > 90 % relative to a model of persistence in trends. The model appears to attain high skill partly because there was a strong impulse of discovery of fossil fuel energy reserves in the mid-twentieth century that helped civilization to grow rapidly as a deterministic physical response. Forecasting the coming century may prove more of a challenge because the effect of the energy impulse appears to have nearly run its course. Nonetheless, an understanding of the external forces that drive civilization may help development of constrained futures for the coupled evolution of civilization and climate during the Anthropocene.

  11. Mars Tumbleweed Simulation Using Singular Perturbation Theory

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Calhoun, Phillip

    2005-01-01

The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run times, and model accuracy, needed to be considered in the modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and to properly evaluate its mobility. Proper assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.

  12. Climate change impacts on crop yield in the Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Toreti, Andrea; Ceglar, Andrej; Dentener, Frank; Niemeyer, Stefan; Dosio, Alessandro; Fumagalli, Davide

    2017-04-01

Agriculture is strongly influenced by climate variability, climate extremes and climate change. Recent studies of past decades have identified and analysed the effects of climate variability and extremes on crop yields in the Euro-Mediterranean region. As these effects could be amplified in a changing climate, it is essential to analyse available climate projections and investigate the possible impacts on European agriculture in terms of crop yield. In this study, five model runs from the Euro-CORDEX initiative under two scenarios (RCP4.5 and RCP8.5) have been used. Climate model data have been bias-corrected and then used to feed a mechanistic crop growth model. The crop model has been run under different settings to better sample the intrinsic uncertainties. Among the main results, it is worth reporting a weak but significant and spatially homogeneous increase in potential wheat yield at mid-century (under a CO2 fertilisation effect scenario). More complex changes seem to characterise potential maize yield, with large areas of the region showing a weak-to-moderate decrease.

  13. Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S. C.; Takacs, L.

    1984-01-01

Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode using the GLAS 4th Order shallow water equations model for an equivalent depth corresponding to that associated with the external vertical mode. A simple procedure was devised to identify computational modes by following the rate of increase of BAL sub M, the partial (with respect to the zonal wavenumber m) sum of squares of the time change of the normal mode coefficients (for fixed vertical mode index) varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.

  14. Models of Disease Vector Control: When Can Aggressive Initial Intervention Lower Long-Term Cost?

    PubMed

    Oduro, Bismark; Grijalva, Mario J; Just, Winfried

    2018-04-01

Insecticide spraying of housing units is an important control measure for vector-borne infections such as Chagas disease. As vectors may invade from both other infested houses and sylvatic areas, and as the effectiveness of insecticide wears off over time, the dynamics of (re)infestations can be approximated by [Formula: see text]-type models with a reservoir, where housing units are treated as hosts, and insecticide spraying corresponds to removal of hosts. Here, we investigate three ODE-based models of this type. We describe a dual-rate effect where an initially very high spraying rate can push the system into a region of the state space with low endemic levels of infestation that can be maintained in the long run at relatively moderate cost, while in the absence of an aggressive initial intervention the same average cost would only allow a much less significant reduction in long-term infestation levels. We determine some sufficient and some necessary conditions under which this effect occurs and show that it is robust in models that incorporate some heterogeneity in the relevant properties of housing units.
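
    The dual-rate effect can be illustrated with a toy bistable infestation ODE integrated by forward Euler. The equation, parameters, and spraying rates below are invented for illustration and are not the models analysed in the paper:

    ```python
    def infestation(sigma, beta=10.0, gamma=0.02, w0=0.8, dt=0.01, years=30.0):
        """Forward-Euler run of a toy bistable infestation model
            w' = beta*w^2*(1 - w) + gamma*(1 - w) - sigma(t)*w,
        where w is the infested fraction of housing units, gamma the
        sylvatic-reservoir pressure, and sigma(t) the spraying rate
        (a constant or a function of time in years)."""
        w = w0
        for step in range(int(years / dt)):
            s = sigma(step * dt) if callable(sigma) else sigma
            w += dt * (beta * w * w * (1.0 - w) + gamma * (1.0 - w) - s * w)
        return w

    # Same sustained spraying rate, with and without an aggressive first phase.
    constant = infestation(sigma=2.0)
    aggressive = infestation(sigma=lambda t: 8.0 if t < 2.0 else 2.0)
    print(f"constant rate:    {constant:.3f}")   # stays at the high state
    print(f"aggressive start: {aggressive:.3f}") # pushed into the low basin
    ```

    In this toy, two years of aggressive spraying carry the system across the unstable threshold into the low-infestation basin, after which the moderate rate alone maintains the low endemic level; the same moderate rate applied from the start leaves infestation at the high equilibrium.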

  15. Simulations of Eurasian winter temperature trends in coupled and uncoupled CFSv2

    NASA Astrophysics Data System (ADS)

    Collow, Thomas W.; Wang, Wanqiu; Kumar, Arun

    2018-01-01

    Conflicting results have been presented regarding the link between Arctic sea-ice loss and midlatitude cooling, particularly over Eurasia. This study analyzes uncoupled (atmosphere-only) and coupled (ocean-atmosphere) simulations by the Climate Forecast System, version 2 (CFSv2), to examine this linkage during the Northern Hemisphere winter, focusing on the simulation of the observed surface cooling trend over Eurasia during the last three decades. The uncoupled simulations are Atmospheric Model Intercomparison Project (AMIP) runs forced with mean seasonal cycles of sea surface temperature (SST) and sea ice, using combinations of SST and sea ice from different time periods to assess the role that each plays individually, and to assess the role of atmospheric internal variability. Coupled runs are used to further investigate the role of internal variability via the analysis of initialized predictions and the evolution of the forecast with lead time. The AMIP simulations show a mean warming response over Eurasia due to SST changes, but little response to changes in sea ice. Individual runs simulate cooler periods over Eurasia, and this is shown to be concurrent with a stronger Siberian high and warming over Greenland. No substantial differences in the variability of Eurasian surface temperatures are found between the different model configurations. In the coupled runs, the region of significant warming over Eurasia is small at short leads, but increases at longer leads. It is concluded that, although the models have some capability in highlighting the temperature variability over Eurasia, the observed cooling may still be a consequence of internal variability.

  16. Multiple Equilibria and Endogenous Cycles in a Non-Linear Harrodian Growth Model

    NASA Astrophysics Data System (ADS)

    Commendatore, Pasquale; Michetti, Elisabetta; Pinto, Antonio

    The standard result of Harrod's growth model is that, because investors react more strongly than savers to a change in income, the long run equilibrium of the economy is unstable. We re-interpret the Harrodian instability puzzle as a local instability problem and integrate his model with a nonlinear investment function. Multiple equilibria and different types of complex behaviour emerge. Moreover, even in the presence of locally unstable equilibria, for a large set of initial conditions the time path of the economy is not diverging, providing a solution to the instability puzzle.
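
    The mechanism can be sketched with a stylized one-dimensional map using an assumed S-shaped (tanh) investment function, not the authors' specification: the normal-capacity equilibrium is locally unstable, yet saturation of investment keeps every trajectory bounded and creates two additional stable equilibria:

    ```python
    import math

    def step(u, s=0.2, i0=0.2, i1=0.15, mu=20.0, alpha=1.0):
        """One period of a stylized Harrodian adjustment,
            u' = u + alpha*(i(u) - s*u),  i(u) = i0 + i1*tanh(mu*(u - 1)),
        for capacity utilization u. At the equilibrium u = 1 the slope is
        1 + alpha*(i1*mu - s) > 1, so it is locally unstable, but the tanh
        saturation bounds investment and hence every trajectory."""
        inv = i0 + i1 * math.tanh(mu * (u - 1.0))
        return u + alpha * (inv - s * u)

    for u0 in (0.98, 1.02):        # start slightly below/above equilibrium
        u = u0
        for _ in range(200):
            u = step(u)
        print(f"u0={u0} -> long-run u={u:.3f}")
    ```

    Starting just below (above) the unstable equilibrium u = 1, iterates converge to a stable low (high) equilibrium near u = 0.25 (u = 1.75) rather than diverging, which is the sense in which local instability coexists with bounded time paths.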

  17. Predictability of short-range forecasting: a multimodel approach

    NASA Astrophysics Data System (ADS)

    García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan

    2011-05-01

Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).

  18. NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.

    2011-01-01

The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, and produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.

  19. High-Resolution Mesoscale Model Setup for the Eastern Range and Wallops Flight Facility

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Zavodsky, Bradley T.

    2015-01-01

Mesoscale weather conditions can have an adverse effect on space launch, landing, ground processing, and weather advisories, watches, and warnings at the Eastern Range (ER) in Florida and Wallops Flight Facility (WFF) in Virginia. During summer, land-sea interactions across Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) lead to sea breeze front formation, which can spawn deep convection that can hinder operations and endanger personnel and resources. Many other weak locally-driven low-level boundaries and their interactions with the sea breeze front and each other can also initiate deep convection in the KSC/CCAFS area. These convective processes often last 60 minutes or less and pose a significant challenge to the local forecasters. Surface winds during the transition seasons (spring and fall) pose the most difficulties for the forecasters at WFF. They also encounter problems forecasting convective activity and temperature during those seasons. Therefore, accurate mesoscale model forecasts are needed to better forecast a variety of unique weather phenomena. Global and national scale models cannot properly resolve important local-scale weather features at each location due to their horizontal resolutions being much too coarse. Therefore, a properly tuned local data assimilation (DA) and forecast model at a high resolution is needed to provide improved capability. To accomplish this, a number of sensitivity tests were performed using the Weather Research and Forecasting (WRF) model in order to determine the best DA/model configuration for operational use at each of the space launch ranges to best predict winds, precipitation, and temperature. A set of Perl scripts to run the Gridpoint Statistical Interpolation (GSI)/WRF in real-time was provided by NASA's Short-term Prediction Research and Transition Center (SPoRT). The GSI can analyze many types of observational data including satellite, radar, and conventional data. 
The GSI/WRF scripts use a cycled GSI system similar to the operational North American Mesoscale (NAM) model. The scripts run a 12-hour pre-cycle in which data are assimilated from 12 hours prior up to the model initialization time. A number of different model configurations were tested for both the ER and WFF by varying the horizontal resolution on which the data assimilation was done. Three different grid configurations were run for the ER and two configurations were run for WFF for archive cases from 27 Aug 2013 through 10 Nov 2013. To quantify model performance, standard model output will be compared to the Meteorological Assimilation Data Ingest System (MADIS) data. The MADIS observation data will be compared to the WRF forecasts using the Model Evaluation Tools (MET) verification package. In addition, the National Centers for Environmental Prediction's Stage IV precipitation data will be used to validate the WRF precipitation forecasts. The author will summarize the relative skill of the various WRF configurations and how each configuration behaves relative to the others, as well as determine the best model configuration for each space launch range.

  20. Exploring the speed and performance of molecular replacement with AMPLE using QUARK ab initio protein models.

    PubMed

    Keegan, Ronan M; Bibby, Jaclyn; Thomas, Jens; Xu, Dong; Zhang, Yang; Mayans, Olga; Winn, Martyn D; Rigden, Daniel J

    2015-02-01

    AMPLE clusters and truncates ab initio protein structure predictions, producing search models for molecular replacement. Here, an interesting degree of complementarity is shown between targets solved using the different ab initio modelling programs QUARK and ROSETTA. Search models derived from either program collectively solve almost all of the all-helical targets in the test set. Initial solutions produced by Phaser after only 5 min perform surprisingly well, improving the prospects for in situ structure solution by AMPLE during synchrotron visits. Taken together, the results show the potential for AMPLE to run more quickly and successfully solve more targets than previously suspected.

  1. Air-Sea Heat Flux Transfer for MJO Initiation Processes during DYNAMO/CINDY2011 in Extended-Range Forecasts

    NASA Astrophysics Data System (ADS)

    Hong, X.; Reynolds, C. A.; Doyle, J. D.

    2016-12-01

In this study, two sets of monthly forecasts for the period of the Dynamics of the Madden-Julian Oscillation (MJO)/Cooperative Indian Ocean Experiment on Intraseasonal Variability (DYNAMO/CINDY) field campaign in November 2011 are examined. Each set includes three forecasts, the first set from the Navy Global Environmental Model (NAVGEM) and the second set from the Navy's non-hydrostatic Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS®1). The three NAVGEM monthly forecasts used sea surface temperature (SST) persisted from the initial time, from the Navy Coupled Ocean Data Assimilation (NCODA) analysis, and from coupled NAVGEM-Hybrid Coordinate Ocean Model (HYCOM) forecasts. NAVGEM can predict the MJO at a 20-day lead time using SST from the analysis and from the coupled NAVGEM-HYCOM, but cannot predict the MJO using the persisted SST, for which a clear circumnavigating signal is absent. The three NAVGEM monthly forecasts are then applied as lateral boundary conditions for three COAMPS monthly forecasts. The results show that all COAMPS runs, including the one using lateral boundary conditions from the NAVGEM forecast without the MJO signal, can predict the MJO. The vertically integrated moisture anomaly and 850-hPa wind anomaly in all COAMPS runs indicate strong anomalous equatorial easterlies associated with a Rossby wave prior to MJO initiation. Strong surface heat fluxes and turbulence kinetic energy promote convective instability and trigger anomalous ascending motion, which deepens the moist boundary layer and develops deep convection into the upper troposphere to form the MJO phase. These results suggest that the air-sea interaction process is important for the initiation and development of the MJO. 1COAMPS® is a registered trademark of the Naval Research Laboratory

  2. Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations

    DTIC Science & Technology

    1990-07-01

    34 ACM Computing Surveys. 6(1): 1- 55. [Syzmanski85] Syzmanski, T. G. and C. J. V. Wyk. (1985). " GOALIE : A Space Efficient System for VLSI Artwork...this. Essentially we initialize a stack with the root. We then pull an element of this stack and if it is a cell we run the occlusion operation on the

  3. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    desirable that the code can be run on a Windows operating system on the laptop, desktop, or workstation. The focus on Windows machines allows for...transition to such systems as operated on the Navy-Marine Corp Internet (NMCI). For each code the initial cost and yearly maintenance are identified...suggestions for reducing this burden to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports

  4. The structure of a market containing boundedly rational firms

    NASA Astrophysics Data System (ADS)

    Ibrahim, Adyda; Zura, Nerda; Saaban, Azizan

    2017-11-01

The structure of a market is determined by the number of active firms in it. Over time, this number is affected by the exit of existing firms, called incumbents, and the entry of new firms, called entrants. In this paper, we consider a market governed by the Cobb-Douglas utility function, such that the demand function is isoelastic. Each firm is assumed to produce a single homogeneous product at a constant unit cost. Furthermore, firms are assumed to be boundedly rational in adjusting their outputs at each period. A firm is considered to exit the market if its output is negative. In this paper, the market is assumed to have zero barrier to entry. Therefore, an exiting firm can re-enter the market if its output becomes positive again, and new firms can enter the market easily. Based on these assumptions and rules, a mathematical model was developed and numerical simulations were run using Matlab. By setting certain values for the parameters in the model, initial numerical simulations showed that in the long run, the number of firms that manage to survive in the market varies between zero and 30. This initial result is consistent with the idea that a zero barrier to entry may produce a perfectly competitive market.
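
    The adjustment process can be sketched as follows. The isoelastic demand p = 1/Q follows from the Cobb-Douglas setup, but the additive gradient rule, cost values, and step size here are illustrative assumptions (the original simulations were run in Matlab):

    ```python
    import numpy as np

    # Toy market of boundedly rational firms: isoelastic demand p = 1/Q,
    # heterogeneous constant unit costs, and a gradient-type output
    # adjustment q_i <- q_i + alpha * d(profit_i)/d(q_i), where
    # profit_i = q_i/Q - c_i*q_i so the gradient is (Q - q_i)/Q^2 - c_i.
    n, alpha, steps = 30, 0.05, 5000
    c = 0.5 + np.arange(n) / n        # unit costs, lowest firm first
    q = np.full(n, 1.0 / n)           # initial outputs

    for _ in range(steps):
        Q = max(q.sum(), 1e-6)        # total output, floored to avoid /0
        marginal_profit = (Q - q) / Q**2 - c
        # Zero barrier to entry/exit: outputs are clipped at 0, and a firm
        # at 0 re-enters as soon as its marginal profit turns positive.
        q = np.clip(q + alpha * marginal_profit, 0.0, None)

    survivors = int((q > 1e-6).sum())
    print(f"{survivors} of {n} firms survive in the long run")
    ```

    With these assumed costs, only the lowest-cost firms remain active in the long run; changing the cost spread or the adjustment speed changes how many of the 30 firms survive, consistent with the zero-to-30 range reported above.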

  5. Challenges in the development of very high resolution Earth System Models for climate science

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun

    2017-04-01

The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in developing the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high-resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute-force long climate simulations.

  6. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies) and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 open-source models and modeling tools, provides access to high performance computing clusters in support of developing and running models, and offers a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. 
Six new community initiatives are being pursued: 1) an earth - ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors, 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico, and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.

  7. Upscaling

    NASA Astrophysics Data System (ADS)

    Vandenbulcke, Luc; Barth, Alexander

    2017-04-01

In the present European operational oceanography context, global and basin-scale models are run daily at different Monitoring and Forecasting Centers from the Copernicus Marine component (CMEMS). Regional forecasting centers, which run outside of CMEMS, then use these forecasts as initial conditions and/or boundary conditions for high-resolution or coastal forecasts. However, these improved simulations are lost to the basin-scale models (i.e. there is no feedback). Therefore, some potential improvements inside (and even outside) the areas covered by regional models are lost, and the risk for discrepancy between basin-scale and regional models remains high. The objective of this study is to simulate two-way nesting by extracting pseudo-observations from the regional models and assimilating them in the basin-scale models. The proposed method is called "upscaling". An ensemble of 100 one-way nested NEMO models of the Mediterranean Sea (Med) (1/16°) and the North-Western Med (1/80°) is implemented to simulate the period 2014-2015. Each member has perturbed initial conditions, atmospheric forcing fields and river discharge data. The Med model uses climatological Rhone river data, while the nested model uses measured daily discharges. The error of the pseudo-observations can be estimated by analyzing the ensemble of nested models. The pseudo-observations are then assimilated in the parent model by means of an Ensemble Kalman Filter. The experiments show that the proposed method improves different processes in the Med model, such as the position of the Northern Current and its incursion (or not) onto the Gulf of Lions, the cold water mass on the shelf, and the position of the Rhone river plume. Regarding areas where no operational regional models exist, (some variables of) the parent model can still be improved by relating some resolved parameters to statistical properties of a higher-resolution simulation. 
This is the topic of a complementary study also presented at the EGU 2017 (Barth et al).
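The assimilation step described above can be sketched with a generic stochastic ensemble Kalman filter analysis. This is a minimal illustration, not the NEMO/upscaling implementation; the dimensions, operator, and values below are hypothetical:

```python
import numpy as np

def enkf_analysis(X, y, H, R, seed=0):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n, N) ensemble of state vectors; y: (m,) observations;
    H: (m, n) observation operator; R: (m, m) obs-error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # obs-space anomalies
    Pxy = A @ HA.T / (N - 1)                       # cross covariance
    Pyy = HA @ HA.T / (N - 1) + R                  # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    rng = np.random.default_rng(seed)
    # each member assimilates a perturbed copy of the observations
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)
```

In the study's setting, `y` would be pseudo-observations extracted from the 1/80° nested ensemble and `R` their error covariance estimated from that same ensemble.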

  8. Scaled centrifugal compressor, collector and running gear program

    NASA Technical Reports Server (NTRS)

    Kenehan, J. G.

    1983-01-01

    The Scaled Centrifugal Compressor, Collector and Running Gear Program was conducted in support of an overall NASA strategy to improve small-compressor performance, durability, and reliability while reducing initial and life-cycle costs. Accordingly, Garrett designed and provided a test rig, gearbox coupling, and facility collector for a new NASA facility, and provided a scaled model of an existing, high-performance impeller for evaluating scaling effects on aerodynamic performance and for obtaining other performance data. Test-rig shafting was designed to operate smoothly throughout a speed range up to 60,000 rpm. Pressurized components were designed to operate at pressures up to 300 psia and at temperatures up to 1000°F. Nonrotating components were designed to provide a margin of safety of 0.05 or greater; rotating components were designed for a margin of safety based on allowable yield and ultimate strengths. Design activities were supported by complete design analysis, and the finished hardware was subjected to check-runs to confirm proper operation. The test rig will support a wide range of compressor tests and evaluations.

  9. Using a coupled hydro-mechanical fault model to better understand the risk of induced seismicity in deep geothermal projects

    NASA Astrophysics Data System (ADS)

    Abe, Steffen; Krieger, Lars; Deckert, Hagen

    2017-04-01

    The changes of fluid pressures related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we are therefore employing a numerical model to investigate the impact of fluid pressure changes on the dynamics of the faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well based on a 3D finite difference discretisation of the Darcy-equation with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration and without the influence of the hydraulic model in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress- and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. 
This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data such as location, magnitude, and source characteristics can be used as input for numerical wave propagation models. This allows the translation of seismic event statistics generated by the model into ground shaking probabilities.

  10. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  11. flexCloud: Deployment of the FLEXPART Atmospheric Transport Model as a Cloud SaaS Environment

    NASA Astrophysics Data System (ADS)

    Morton, Don; Arnold, Dèlia

    2014-05-01

    FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. We have used it to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides. Additionally, FLEXPART may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form, and has several compiler and library dependencies that users need to address. Although well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced user. Our interest is in moving scientific modeling and simulation activities from site-specific clusters and supercomputers to a cloud model-as-a-service paradigm. Choosing FLEXPART for our prototyping, our vision is to construct customised IaaS images containing fully-compiled and configured FLEXPART codes, including pre-processing, execution and post-processing components. In addition, with the inclusion of a small web server in the image, we introduce a web-accessible graphical user interface that drives the system. A further initiative being pursued is the deployment of multiple, simultaneous FLEXPART ensembles in the cloud. A single front-end web interface is used to define the ensemble members, and separate cloud instances are launched, on-demand, to run the individual models and to conglomerate the outputs into a unified display. The outcome of this work is a Software as a Service (SaaS) deployment whereby the details of the underlying modeling systems are hidden, allowing modelers to perform their science activities without the burden of considering implementation details.

  12. Agglomeration of dust in convective clouds initialized by nuclear bursts

    NASA Astrophysics Data System (ADS)

    Bacon, D. P.; Sarma, R. A.

    Convective clouds initialized by nuclear bursts are modeled using a two-dimensional axisymmetric cloud model. Dust transport through the atmosphere is studied using five different sizes ranging from 1 to 10,000 μm in diameter. Dust is transported in the model domain by advection and sedimentation. Water is allowed to condense onto dust particles in regions of supersaturation in the cloud. The agglomeration of dust particles resulting from the collision of different size dust particles is modeled. The evolution of the dust mass spectrum due to agglomeration is modeled using a numerical scheme which is mass conserving and has low implicit diffusion. Agglomeration moves mass from the small particles with very small fall velocity to the larger sizes which fall to the ground more readily. Results indicate that the dust fallout can be increased significantly due to this process. In preliminary runs using stable and unstable environmental soundings, at 30 min after detonation the total dust in the domain was 11 and 30%, respectively, less than a control case without agglomeration.
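A minimal, mass-conserving sketch of the agglomeration (coagulation) process described above. The binning, kernel, and values are hypothetical stand-ins; the paper's actual scheme is a low-implicit-diffusion variant of this idea:

```python
import numpy as np

def agglomeration_step(n, m, K, dt):
    """One explicit step of discrete (Smoluchowski-type) coagulation.
    n[i]: number concentration in bin i; m[i]: bin mass; K[i, j]:
    collision kernel. Colliding pairs are moved to the bin nearest the
    combined mass, with a mass-weighted count so total mass is
    conserved exactly (a crude stand-in for the paper's scheme)."""
    dn = np.zeros_like(n)
    for i in range(len(n)):
        for j in range(i, len(n)):
            rate = K[i, j] * n[i] * n[j] * dt
            if i == j:
                rate *= 0.5                                # avoid double counting
            k = int(np.argmin(np.abs(m - (m[i] + m[j]))))  # destination bin
            dn[i] -= rate
            dn[j] -= rate
            dn[k] += rate * (m[i] + m[j]) / m[k]           # exact mass transfer
    return n + dn
```

The mass-weighted transfer into the destination bin is what makes the step exactly conservative; an explicit step like this also requires a small enough `dt` that bins are not over-depleted.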

  13. Secular trends and climate drift in coupled ocean-atmosphere general circulation models

    NASA Astrophysics Data System (ADS)

    Covey, Curt; Gleckler, Peter J.; Phillips, Thomas J.; Bader, David C.

    2006-02-01

    Coupled ocean-atmosphere general circulation models (coupled GCMs) with interactive sea ice are the primary tool for investigating possible future global warming and numerous other issues in climate science. A long-standing problem with such models is that when different components of the physical climate system are linked together, the simulated climate can drift away from observation unless constrained by ad hoc adjustments to interface fluxes. However, 11 modern coupled GCMs, including three that do not employ flux adjustments, behave much better in this respect than the older generation of models. Surface temperature trends in control run simulations (with external climate forcing such as solar brightness and atmospheric carbon dioxide held constant) are small compared with observed trends, which include 20th century climate change due to both anthropogenic and natural factors. Sea ice changes in the models are dominated by interannual variations. Deep ocean temperature and salinity trends are small enough for model control runs to extend over 1000 simulated years or more, but trends in some regions, most notably the Arctic, differ substantially among the models and may be problematic. Methods used to initialize coupled GCMs can mitigate climate drift but cannot eliminate it. Lengthy "spin-ups" of models, made possible by increasing computer power, are one reason for the improvements this paper documents.

  14. The Scylla Multi-Code Comparison Project

    NASA Astrophysics Data System (ADS)

    Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team

    2016-01-01

    Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example, we find that in all runs cold flow disks exist: extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes, and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.

  15. Investigating the Potential Impact of the Surface Water and Ocean Topography (SWOT) Altimeter on Ocean Mesoscale Prediction

    NASA Astrophysics Data System (ADS)

    Carrier, M.; Ngodock, H.; Smith, S. R.; Souopgui, I.

    2016-02-01

    NASA's Surface Water and Ocean Topography (SWOT) satellite, scheduled for launch in 2020, will provide sea surface height anomaly (SSHA) observations with a wider swath width and higher spatial resolution than current satellite altimeters. It is expected that this will help to further constrain ocean models in terms of the mesoscale circulation. In this work, this expectation is investigated by way of twin data assimilation experiments using the Navy Coastal Ocean Model Four Dimensional Variational (NCOM-4DVAR) data assimilation system using a weak constraint formulation. Here, a nature run is created from which SWOT observations are sampled, as well as along-track SSHA observations from simulated Jason-2 tracks. The simulated SWOT data has appropriate spatial coverage, resolution, and noise characteristics based on an observation-simulator program provided by the SWOT science team. The experiment is run for a three-month period during which the analysis is updated every 24 hours and each analysis is used to initialize a 96 hour forecast. The forecasts in each experiment are compared to the available nature run to determine the impact of the assimilated data. It is demonstrated here that the SWOT observations help to constrain the model mesoscale in a more consistent manner than traditional altimeter observations. The findings of this study suggest that data from SWOT may have a substantial impact on improving the ocean model analysis and forecast of mesoscale features and surface ocean transport.

  16. Adjustments with running speed reveal neuromuscular adaptations during landing associated with high mileage running training.

    PubMed

    Verheul, Jasper; Clansey, Adam C; Lake, Mark J

    2017-03-01

    It remains to be determined whether running training influences the amplitude of lower limb muscle activations before and during the first half of stance and whether such changes are associated with joint stiffness regulation and usage of stored energy from tendons. Therefore, the aim of this study was to investigate neuromuscular and movement adaptations before and during landing in response to running training across a range of speeds. Two groups of high mileage (HM; >45 km/wk, n = 13) and low mileage (LM; <15 km/wk, n = 13) runners ran at four speeds (2.5-5.5 m/s) while lower limb mechanics and electromyography of the thigh muscles were collected. There were few differences in prelanding activation levels, but HM runners displayed lower activations of the rectus femoris, vastus medialis, and semitendinosus muscles postlanding, and these differences increased with running speed. HM runners also demonstrated higher initial knee stiffness during the impact phase compared with LM runners, which was associated with an earlier peak knee flexion velocity, and both were relatively unchanged by running speed. In contrast, LM runners had higher knee stiffness during the slightly later weight acceptance phase and the disparity was amplified with increases in speed. It was concluded that initial knee joint stiffness might predominantly be governed by tendon stiffness rather than muscular activations before landing. Estimated elastic work about the ankle was found to be higher in the HM runners, which might play a role in reducing weight acceptance phase muscle activation levels and improve muscle activation efficiency with running training. NEW & NOTEWORTHY Although neuromuscular factors play a key role during running, the influence of high mileage training on neuromuscular function has been poorly studied, especially in relation to running speed. 
This study is the first to demonstrate changes in neuromuscular conditioning with high mileage training, mainly characterized by lower thigh muscle activation after touch down, higher initial knee stiffness, and greater estimates of energy return, with adaptations being increasingly evident at faster running speeds. Copyright © 2017 the American Physiological Society.

  17. Improved version of the PHOBOS Glauber Monte Carlo

    DOE PAGES

    Loizides, C.; Nagle, J.; Steinberg, P.

    2015-09-01

    “Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
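The core Monte Carlo Glauber procedure can be conveyed with a stripped-down sketch. The Woods-Saxon parameters here are Pb-like assumptions, and this toy omits everything the PHOBOS v2 code actually adds (deformation, Glauber-Gribov fluctuations, light nuclei):

```python
import numpy as np

def glauber_event(A, b, rng, R=6.62, a=0.546, sigma_nn=6.4):
    """One toy Monte Carlo Glauber event for a symmetric A+A collision
    at impact parameter b (fm). Nucleons are sampled from a spherical
    Woods-Saxon density (R, a in fm); two nucleons collide if their
    transverse distance squared is below sigma_nn/pi, with sigma_nn
    the inelastic NN cross section in fm^2. Returns (Npart, Ncoll)."""
    def sample_nucleus(n):
        pos = []
        while len(pos) < n:
            r = rng.uniform(0, 3 * R)
            # accept-reject against r^2 times the Woods-Saxon profile
            if rng.uniform() < (r**2 / (3 * R)**2) / (1 + np.exp((r - R) / a)):
                phi = rng.uniform(0, 2 * np.pi)
                costh = rng.uniform(-1, 1)
                sinth = np.sqrt(1 - costh**2)
                # only the transverse (x, y) coordinates matter here
                pos.append([r * sinth * np.cos(phi), r * sinth * np.sin(phi)])
        return np.array(pos)
    d2 = sigma_nn / np.pi                       # max transverse distance^2
    A_pos = sample_nucleus(A) + [b / 2, 0.0]    # shift nuclei apart by b
    B_pos = sample_nucleus(A) - np.array([b / 2, 0.0])
    dists2 = ((A_pos[:, None, :] - B_pos[None, :, :]) ** 2).sum(-1)
    hits = dists2 < d2
    ncoll = hits.sum()
    npart = hits.any(1).sum() + hits.any(0).sum()
    return npart, ncoll
```

Averaging such events over many sampled impact parameters is what yields the centrality observables (Npart, Ncoll, eccentricity) the abstract refers to.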

  18. Hydrocarbon polymeric binder for advanced solid propellant

    NASA Technical Reports Server (NTRS)

    Potts, J. E. (Editor)

    1972-01-01

    A series of DEAB initiated isoprene polymerizations were run in the 5-gallon stirred autoclave reactor. Polymerization run parameters such as initiator concentration and feed rate were correlated with the molecular weight to provide a basis for molecular weight control in future runs. Synthetic methods were developed for the preparation of n-1,3-alkadienes. By these methods, 1,3-nonadiene was polymerized using DEAB initiator to give an ester-telechelic polynonadiene. This was subsequently hydrogenated with a copper chromite catalyst to give a hydroxyl-terminated saturated liquid hydrocarbon prepolymer having greatly improved viscosity characteristics and a Tg 18 degrees lower than that of the hydrogenated polyisoprenes. The hydroxyl-telechelic saturated polymers prepared by the hydrogenolysis of ester-telechelic polyisoprene were reacted with diisocyanates under conditions favoring linear chain extension. Gel permeation chromatography was used to monitor this condensation polymerization. Fractions having molecular weights above one million were produced.

  19. Should tsunami models use a nonzero initial condition for horizontal velocity?

    NASA Astrophysics Data System (ADS)

    Nava, G.; Lotto, G. C.; Dunham, E. M.

    2017-12-01

    Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require two initial conditions: one on sea surface height and another on depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). We run several full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor, using both idealized structures and a more realistic Tohoku structure. Substantial horizontal momentum is imparted to the ocean, but almost all momentum is carried away in the form of ocean acoustic waves. We compare tsunami propagation in each full-physics simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial conditions. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. 
Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves (from ocean acoustic and seismic waves) at some final time, and backpropagating the tsunami waves to their initial state by solving the adjoint problem. The resulting initial conditions have negligible horizontal velocity.
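The effect of the competing initial-velocity choices can be explored with a minimal linearized shallow-water sketch. This is an idealized 1D periodic toy with made-up dimensions, not the authors' full-physics model: zero initial velocity splits the hump into two half-amplitude waves, while a velocity matched to the surface height (the Riemann-invariant relation u = sqrt(g/h)·eta) sends one full-amplitude wave in a single direction:

```python
import numpy as np

def shallow_water_1d(eta0, u0, h, dx, dt, nsteps, g=9.81):
    """Minimal linearized 1D shallow-water solver on a periodic,
    staggered (C-) grid with a forward-backward time step; eta is
    sea surface height, u the depth-averaged horizontal velocity."""
    eta, u = eta0.astype(float), u0.astype(float)
    for _ in range(nsteps):
        u = u - g * dt * (np.roll(eta, -1) - eta) / dx   # momentum
        eta = eta - h * dt * (u - np.roll(u, 1)) / dx    # continuity
    return eta, u

# toy comparison: zero vs. height-matched nonzero initial velocity
x = np.arange(200) * 1000.0                  # 200 km periodic domain
h, dt = 4000.0, 4.0                          # 4 km deep ocean
eta0 = np.exp(-((x - 100e3) / 10e3) ** 2)    # 1 m Gaussian hump
eta_split, _ = shallow_water_1d(eta0, np.zeros_like(eta0), h, 1000.0, dt, 100)
eta_one, _ = shallow_water_1d(eta0, np.sqrt(9.81 / h) * eta0, h, 1000.0, dt, 100)
```

With zero initial velocity the peak drops to roughly half the initial amplitude; with the matched velocity it stays near the full amplitude, which is the kind of factor-of-two sensitivity the initial-condition debate is about.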

  20. Reduced physical activity and risk of chronic disease: the biology behind the consequences.

    PubMed

    Booth, Frank W; Laye, Matthew J; Lees, Simon J; Rector, R Scott; Thyfault, John P

    2008-03-01

    This review focuses on three preserved, ancient, biological mechanisms (physical activity, insulin sensitivity, and fat storage). Genes in humans and rodents were selected in an environment of high physical activity that favored an optimization of aerobic metabolic pathways to conserve energy for a potential, future food deficiency. Today machines and other technologies have replaced much of the physical activity that selected optimal gene expression for energy metabolism. Distressingly, the negative by-product of a lack of ancient physical activity levels in our modern civilization is an increased risk of chronic disease. We have been employing a rodent wheel-lock model to approximate the reduction in physical activity in humans from the level under which genes were selected to a lower level observed in modern daily functioning. Thus far, two major changes have been identified when rats undertaking daily, natural voluntary running on wheels experience an abrupt cessation of the running (wheel lock model). First, insulin sensitivity in the epitrochlearis muscle of rats falls to sedentary values after 2 days of the cessation of running, confirming the decline to sedentary values in whole-body insulin sensitivity when physically active humans stop high levels of daily exercise. Second, visceral fat increases within 1 week after rats cease daily running, confirming the plasticity of human visceral fat. This review focuses on the supporting data for the aforementioned two outcomes. Our primary goal is to better understand how a physically inactive lifestyle initiates maladaptations that cause chronic disease.

  1. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  2. Different types of drifts in two seasonal forecast systems and their dependence on ENSO

    NASA Astrophysics Data System (ADS)

    Hermanson, L.; Ren, H.-L.; Vellinga, M.; Dunstone, N. D.; Hyder, P.; Ineson, S.; Scaife, A. A.; Smith, D. M.; Thompson, V.; Tian, B.; Williams, K. D.

    2017-11-01

    Seasonal forecasts using coupled ocean-atmosphere climate models are increasingly employed to provide regional climate predictions. For the quality of forecasts to improve, regional biases in climate models must be diagnosed and reduced. The evolution of biases as initialized forecasts drift away from the observations is poorly understood, making it difficult to diagnose the causes of climate model biases. This study uses two seasonal forecast systems to examine drifts in sea surface temperature (SST) and precipitation, and compares them to the long-term bias in the free-running version of each model. Drifts are considered from daily to multi-annual time scales. We define three types of drift according to their relation with the long-term bias in the free-running model: asymptoting, overshooting and inverse drift. We find that precipitation almost always has an asymptoting drift. SST drifts on the other hand, vary between forecasting systems, where one often overshoots and the other often has an inverse drift. We find that some drifts evolve too slowly to have an impact on seasonal forecasts, even though they are important for climate projections. The bias found over the first few days can be very different from that in the free-running model, so although daily weather predictions can sometimes provide useful information on the causes of climate biases, this is not always the case. We also find that the magnitude of equatorial SST drifts, both in the Pacific and other ocean basins, depends on the El Niño Southern Oscillation (ENSO) phase. Averaging over all hindcast years can therefore hide the details of ENSO state dependent drifts and obscure the underlying physical causes. Our results highlight the need to consider biases across a range of timescales in order to understand their causes and develop improved climate models.
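The three drift types defined above can be expressed as a toy classifier; the threshold and the sign conventions are illustrative assumptions, not the study's diagnostic:

```python
def classify_drift(drift, bias, tol=0.05):
    """Toy classifier for the three drift types described above.
    drift: model-minus-observation values by forecast lead time;
    bias: the free-running model's long-term mean bias (nonzero).
    The tolerance `tol` is an illustrative choice."""
    if bias == 0:
        raise ValueError("long-term bias must be nonzero")
    if drift[-1] * bias < 0:
        return "inverse"        # drifting in the opposite sense to the bias
    if any(d * bias > 0 and abs(d) > (1 + tol) * abs(bias) for d in drift):
        return "overshooting"   # passes beyond the long-term bias
    return "asymptoting"        # approaches the bias without exceeding it
```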

  3. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
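The flavor of the MCMC credible-interval computation can be conveyed with a simple random-walk Metropolis sketch. This is a stand-in for the DREAM algorithm, and the one-parameter model and flat prior are hypothetical:

```python
import numpy as np

def metropolis(logpost, x0, nsteps, step, rng):
    """Random-walk Metropolis sampler (a much simpler stand-in for
    DREAM) returning the chain of parameter samples."""
    x = np.asarray(x0, float)
    lp = logpost(x)
    chain = np.empty((nsteps, x.size))
    for i in range(nsteps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# hypothetical example: posterior of a mean given Gaussian data
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)
logpost = lambda m: -0.5 * np.sum((data - m) ** 2)   # flat prior
chain = metropolis(logpost, [0.0], 20000, 0.3, rng)
# discard burn-in, then read off a 95% Bayesian credible interval
lo, hi = np.percentile(chain[5000:], [2.5, 97.5])
```

The credible interval falls out directly as percentiles of the post-burn-in chain, which is why the MCMC option needs few assumptions but many model runs.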

  4. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
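The two ingredients, sparse coding and a block-coordinate dictionary update over accumulated sufficient statistics, can be sketched as follows. This follows the general online dictionary-learning recipe (Mairal et al. style), not the authors' exact K-SVD/SSC code, and all sizes are hypothetical:

```python
import numpy as np

def sparse_code(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most
    k atoms (columns) of dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))      # best-matching atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                 # orthogonal residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def update_dictionary(D, A, B, n_iter=5):
    """Block-coordinate descent dictionary update: A = sum of x x^T and
    B = sum of y x^T accumulate sufficient statistics of all training
    shapes seen so far, so the dictionary can be refreshed column by
    column without re-solving from scratch when new shapes arrive."""
    for _ in range(n_iter):
        for j in range(D.shape[1]):
            if A[j, j] > 1e-12:
                D[:, j] += (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] /= max(1.0, np.linalg.norm(D[:, j]))  # keep atoms bounded
    return D
```

Because `A` and `B` are just running sums, incorporating a new batch of training shapes costs one accumulation plus a cheap dictionary refresh, which is the online property the abstract describes.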

  5. Impact of improved soil climatology and initialization on WRF-chem dust simulations over West Asia

    NASA Astrophysics Data System (ADS)

    Omid Nabavi, Seyed; Haimberger, Leopold; Samimi, Cyrus

    2016-04-01

    Meteorological forecast models such as WRF-chem are designed to forecast not only standard atmospheric parameters but also aerosol, particularly mineral dust, concentrations. WRF-chem has therefore become an important tool for the prediction of dust storms in West Asia, where dust storms have a considerable impact on living conditions. However, verification of forecasts against satellite data indicates only moderate skill in predicting such events. Earlier studies have already indicated that the erosion factor, land use classification, and soil moisture and temperature initializations play a critical role in the accuracy of WRF-chem dust simulations. In the standard setting, the erosion factor and land use classification are based on topographic variations and post-processed images of the advanced very high-resolution radiometer (AVHRR) during the period April 1992-March 1993. Furthermore, WRF-chem is normally initialized by the soil moisture and temperature of the Final Analysis (FNL) model on 1.0x1.0 degree grids. In this study, we have changed the initial and boundary conditions so that they better represent current, changing environmental conditions. To do so, land use (only the bare soil class) and the erosion factor were both modified using information from MODIS deep blue AOD (Aerosol Optical Depth). In this method, bare soil is assigned where the relative frequency of dust occurrence (deep blue AOD > 0.5) exceeds one-third of a given month. Subsequently, the erosion factor, limited to the bare soil class, is determined by the monthly frequency of dust occurrence, ranging from 0.3 to 1. It is worth mentioning that 50 percent of the calculated erosion factor is assigned to the sand class, while the silt and clay classes each receive 25 percent. Soil moisture and temperature from the Global Land Data Assimilation System (GLDAS) were utilized to provide these initializations at a higher resolution (0.25 degrees) than the standard setting.
Modified and control simulations were conducted for the summers of 2008-2012 and verified against satellite data (MODIS deep blue AOD, TOMS Aerosol Index and MISR AOD 550 nm) and two well-known atmospheric composition modeling systems (MACC and DREAM). All comparisons show a significant improvement in WRF-chem dust simulations after implementing the modifications. Compared to the control run, the modified run shows an average increase in Spearman correlation of 17-20 percentage points against satellite data. Our runs with the modified WRF-chem even outperform the MACC and DREAM dust simulations for the region.
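    The verification metric above can be sketched in a few lines. The rank-correlation routine below is a standard pure-Python implementation, and the AOD values in it are illustrative stand-ins, not the study's data.

```python
def ranks(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical daily AOD values: satellite observations vs. two model runs
obs      = [0.3, 0.8, 0.5, 1.2, 0.9, 0.4]
control  = [0.5, 0.6, 0.7, 0.9, 0.6, 0.5]
modified = [0.35, 0.75, 0.55, 1.1, 0.95, 0.45]

print(round(spearman(obs, control), 3))
print(round(spearman(obs, modified), 3))
```

    A modified run that better preserves the observed day-to-day ordering of dust loadings yields the higher rank correlation, which is the sense in which the abstract reports a 17-20 percentage-point improvement.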

  6. Predictability and Coupled Dynamics of MJO During DYNAMO

    DTIC Science & Technology

    2015-02-03

    with two complementary atmosphere-only simulations with modified SST conditions. One WRF simulation is forced with the persistent initial SST, lacking...we have contributed to the following subset of accomplishments of the multi-institutional team: a. Run SCOAR2 (WRF-ROMS) in downscaling mode for the 2...Regional (SCOAR) Model Seo et al. (2007; 2014, J. Climate), http://scoar.wikispaces.com [diagram: WRF/RSM and ROMS two-way coupling]

  7. The Deterministic Mine Burial Prediction System

    DTIC Science & Technology

    2009-01-12

    or below the water-line, initial linear and angular velocities, and fall angle relative to the mine’s axis of symmetry. Other input data needed...c. Run_DMBP.m: start-up MATLAB script for the program 2. C:\\DMBP\\DMBP_src: This directory contains source code, geotechnical databases, and...approved for public release). b. \\Impact_35: The IMPACT35 model c. \\MakeTPARfiles: scripts for creating wave height and wave period input data from

  8. Patellofemoral joint stress during running with alterations in foot strike pattern.

    PubMed

    Vannatta, Charles Nathan; Kernozek, Thomas W

    2015-05-01

    This study aimed to quantify differences in patellofemoral joint stress that may occur when healthy runners alter their foot strike pattern from their habitual rearfoot strike to a forefoot strike, to gain insight into the potential etiology and treatment methods of patellofemoral pain. Sixteen healthy female runners completed 20 running trials in a controlled laboratory setting under rearfoot strike and forefoot strike conditions. Kinetic and kinematic data were used to drive a static optimization technique to estimate individual muscle forces, which were input into a model of the patellofemoral joint to estimate joint stress during running. Peak patellofemoral joint stress and the stress-time integral over stance phase decreased by 27% and 12%, respectively, in the forefoot strike condition (P < 0.001). Peak vertical ground reaction force increased slightly in the forefoot strike condition (P < 0.001). Peak quadriceps force and average hamstring force decreased, whereas gastrocnemius and soleus muscle forces increased when running with a forefoot strike (P < 0.05). Knee flexion angle at initial contact increased (P < 0.001), total knee excursion decreased (P < 0.001), and no change occurred in peak knee flexion angle (P = 0.238). Step length did not change between conditions (P = 0.375), but the leading leg landed with the foot positioned a shorter horizontal distance from the hip at initial contact in the forefoot strike condition (P < 0.001). Altering one's strike pattern to a forefoot strike results in consistent reductions in patellofemoral joint stress independent of changes in step length. Thus, implementation of forefoot strike training programs may be warranted in the treatment of runners with patellofemoral pain. However, it is suggested that the transition to a forefoot strike pattern should be completed in a graduated manner.

  9. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
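    A minimal sketch of the idea behind the approximation (our construction, not the authors' test problem): for steady one-dimensional Dupuit flow between two fixed heads, specifying the saturated thickness linearizes the head profile, and the resulting head error stays small even at a sizeable drawdown fraction. All numbers are illustrative.

```python
# Boundary heads (m) and domain length (m); drawdown is 20% of h0
h0, hL, L = 50.0, 40.0, 1000.0

def head_unconfined(x):
    # Dupuit (unconfined) solution: h^2 varies linearly between boundaries
    return (h0**2 - (h0**2 - hL**2) * x / L) ** 0.5

def head_confined(x):
    # Specified-thickness (linearized, "confined") solution: h varies linearly
    return h0 - (h0 - hL) * x / L

x = L / 2
err = abs(head_confined(x) - head_unconfined(x))
print(f"midpoint error: {err:.3f} m ({100 * err / h0:.2f}% of initial thickness)")
```

    Even with boundary drawdown equal to 20% of the initial saturated thickness, the midpoint head error of the linearized solution is well under 1% of that thickness, which is the qualitative behavior the error analysis in the article formalizes.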

  10. Does the Wage Gap between Private and Public Sectors Encourage Political Corruption?

    PubMed Central

    Podobnik, Boris; Vukovic, Vuk; Stanley, H. Eugene

    2015-01-01

    We present a dynamic network model of corrupt and noncorrupt employees representing two states in the public and private sector. Corrupt employees are more connected to one another and are less willing to change their attitudes regarding corruption than noncorrupt employees. This behavior enables them to prevail and become the majority in the workforce through a first-order phase transition even though they initially represented a minority. In the model, democracy—understood as the principle of majority rule—does not create corruption, but it serves as a mechanism that preserves corruption in the long run. The motivation for our network model is a paradox that exists on the labor market. Although economic theory indicates that higher risk investments should lead to larger rewards, in many developed and developing countries workers in lower-risk public sector jobs are paid more than workers in higher-risk private sector jobs. To determine the long-run sustainability of this economic paradox, we study data from 28 EU countries and find that the public sector wage premium increases with the level of corruption. PMID:26495847
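    The mechanism the abstract describes can be illustrated with a toy simulation. This is our own minimal construction, not the authors' network model: agents on a ring adopt the local majority state, but "corrupt" agents (state 1) demand a larger majority before switching, which captures the asymmetric stubbornness the paper describes.

```python
import random

random.seed(2)
N = 200
# corrupt agents start as a minority (~40%)
state = [1 if random.random() < 0.4 else 0 for _ in range(N)]

def step(state, stubbornness=0.75):
    """One synchronous update over 4 nearest neighbors on a ring."""
    new = state[:]
    for i in range(N):
        nbrs = [state[(i + d) % N] for d in (-2, -1, 1, 2)]
        frac_corrupt = sum(nbrs) / len(nbrs)
        if state[i] == 0 and frac_corrupt > 0.5:
            new[i] = 1   # noncorrupt agents switch at a simple majority
        elif state[i] == 1 and (1 - frac_corrupt) > stubbornness:
            new[i] = 0   # corrupt agents switch only at a supermajority
    return new

for _ in range(100):
    state = step(state)
print(sum(state) / N)  # final corrupt fraction
```

    The asymmetric switching thresholds are the key ingredient: isolated corrupt agents die out, but corrupt clusters are hard to erode, mirroring the paper's point that majority-rule dynamics preserve corruption once it is entrenched.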

  11. Does the Wage Gap between Private and Public Sectors Encourage Political Corruption?

    PubMed

    Podobnik, Boris; Vukovic, Vuk; Stanley, H Eugene

    2015-01-01

    We present a dynamic network model of corrupt and noncorrupt employees representing two states in the public and private sector. Corrupt employees are more connected to one another and are less willing to change their attitudes regarding corruption than noncorrupt employees. This behavior enables them to prevail and become the majority in the workforce through a first-order phase transition even though they initially represented a minority. In the model, democracy, understood as the principle of majority rule, does not create corruption, but it serves as a mechanism that preserves corruption in the long run. The motivation for our network model is a paradox that exists on the labor market. Although economic theory indicates that higher risk investments should lead to larger rewards, in many developed and developing countries workers in lower-risk public sector jobs are paid more than workers in higher-risk private sector jobs. To determine the long-run sustainability of this economic paradox, we study data from 28 EU countries and find that the public sector wage premium increases with the level of corruption.

  12. STOCHASTICITY AND EFFICIENCY IN SIMPLIFIED MODELS OF CORE-COLLAPSE SUPERNOVA EXPLOSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardall, Christian Y.; Budiardja, Reuben D., E-mail: cardallcy@ornl.gov, E-mail: reubendb@utk.edu

    2015-11-01

    We present an initial report on 160 simulations of a highly simplified model of the post-bounce core-collapse supernova environment in three spatial dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a “critical neutrino luminosity” for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are much more stochastic: a sharp threshold critical luminosity is “smeared out” into a rising probability of explosion over a ∼20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3–4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.

  13. A satellite observation test bed for cloud parameterization development

    NASA Astrophysics Data System (ADS)

    Lebsock, M. D.; Suselj, K.

    2015-12-01

    We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include cloud cover and three-dimensional cloud fraction, along with cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets needed to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, coupled to microphysics and macrophysics packages and run in SCM mode, with observed clouds. Comparisons are performed regionally in areas of climatological subsidence as well as stratified by dynamical and thermodynamical variables. The comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.

  14. The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy

    PubMed Central

    Berry, Nathaniel T.; Wideman, Laurie; Shields, Edgar W.; Battaglini, Claudio L.

    2016-01-01

    Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory controlled setting. Seven highly trained multisport athletes completed three trials; Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes, since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds. Key points: (1) relative oxygen uptake at VT (ml·kg⁻¹·min⁻¹) decreased during the final leg of the duathlon simulation compared to a single-bout maximal run; (2) running speed at VT decreased during the final leg, resulting in an increase of more than 2 minutes in the time to complete a 5 km run; (3) the highly trained athletes were unable to complete the final 5 km run at the same intensity at which they completed the initial 10 km run (in a laboratory setting); (4) a better understanding and determination of training loads during multisport training may help to better periodize training programs; additional research is required. PMID:27274661

  15. Relaxation processes in a low-order three-dimensional magnetohydrodynamics model

    NASA Technical Reports Server (NTRS)

    Stribling, Troy; Matthaeus, William H.

    1991-01-01

    The time asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants, including energy, cross helicity, and magnetic helicity. It is concluded that the time asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time asymptotic states and to accurately describe the long-time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values of O(1). The faster of the two processes takes states initially dominated by magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power-law growth of kinetic energy. The other process takes states initially dominated by kinetic energy to the near-equipartitioned state through exponential growth of magnetic energy.

  16. Comparison of Four Precipitation Forcing Datasets in Land Information System Simulations over the Continental U.S.

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Kuligowski, Robert J.; Langston, Carrie

    2013-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the NASA Land Information System (LIS) with the Noah land surface model (LSM). Output from the SPoRT-LIS run is used to initialize land surface variables for local modeling applications at select National Weather Service (NWS) partner offices, and can be displayed in decision support systems for situational awareness and drought monitoring. The SPoRT-LIS is run over a domain covering the southern and eastern United States, fully nested within the National Centers for Environmental Prediction Stage IV precipitation analysis grid, which provides precipitation forcing to the offline LIS-Noah runs. The SPoRT Center seeks to expand the real-time LIS domain to the entire Continental U.S. (CONUS); however, geographical limitations of the Stage IV analysis product have inhibited this expansion. Therefore, a goal of this study is to test alternative precipitation forcing datasets that can enable the LIS expansion by improving upon the current geographical limitations of the Stage IV product. The four precipitation forcing datasets that are inter-compared on a 4-km resolution CONUS domain include the Stage IV, an experimental GOES quantitative precipitation estimate (QPE) from NESDIS/STAR, the National Mosaic and QPE (NMQ) product from the National Severe Storms Laboratory, and the North American Land Data Assimilation System phase 2 (NLDAS-2) analyses. The NLDAS-2 dataset is used as the control run, with each of the other three datasets considered experimental runs compared against the control. The regional strengths, weaknesses, and biases of each precipitation analysis are identified relative to the NLDAS-2 control in terms of accumulated precipitation pattern and amount, and the impacts on the subsequent LSM spin-up simulations. 
The ultimate goal is to identify an alternative precipitation forcing dataset that can best support an expansion of the real-time SPoRT-LIS to a domain covering the entire CONUS.

  17. A comparison of the spatiotemporal parameters, kinematics, and biomechanics between shod, unshod, and minimally supported running as compared to walking.

    PubMed

    Lohman, Everett B; Balan Sackiriyas, Kanikkai Steni; Swen, R Wesley

    2011-11-01

    Recreational running has many proven benefits, which include increased cardiovascular, physical, and mental health. It is no surprise that Running USA reported that over 10 million individuals completed running road races in 2009, not to mention the recreational joggers who do not wish to compete in organized events. Unfortunately, there are numerous risks associated with running, the most common being musculoskeletal injuries attributed to incorrect shoe choice, training errors, excessive shoe wear, or other biomechanical factors associated with ground reaction forces. Approximately 65% of chronic injuries in distance runners are related to routine high mileage, rapid increases in mileage, increased intensity, hill or irregular-surface running, and surface firmness. Humans have been running barefoot or in minimally supportive footwear such as moccasins or sandals since the beginning of time, whereas modern running shoes were not invented until the 1970s. However, the current trend is that many runners are moving back to barefoot running or running in "minimal" shoes. The goal of this masterclass article is to examine the similarities and differences between shod and unshod (barefoot or minimally supportive running shoes) runners by examining spatiotemporal parameters, energetics, and biomechanics. These running parameters will be compared and contrasted with walking. The most obvious difference between the walking and running gait cycles is the elimination of the double limb support phase of walking gait in exchange for a float (no limb support) phase. The biggest difference between barefoot and shod runners is at the initial contact phase of gait, where the barefoot or minimally supported runner initiates contact with the forefoot or midfoot instead of the rearfoot. As movement science experts, physical therapists are often called upon to assess the gait of a running athlete, their choice of footwear, and their training regime. 
With a clearer understanding of running and its complexities, the physical therapist will be able to better identify faults and create informed treatment plans while rehabilitating patients who are experiencing musculoskeletal injuries due to running. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. The effects of altering initial ground contact in the running gait of an individual with transtibial amputation.

    PubMed

    Waetjen, Linda; Parker, Matthew; Wilken, Jason M

    2012-09-01

    High rates of osteoarthritis of the knee joint of the intact limb in persons with amputation have raised concern about the long-term consequence of running. The purpose of this intervention was to determine if loading of the knee on the intact limb of a person with transtibial amputation during running could be decreased by changing the intact limb initial ground contact from rear foot to forefoot strike. This study compared kinematic, kinetic and temporal-spatial data collected while a 27-year-old male, who sustained a traumatic unilateral transtibial amputation of the left lower extremity, ran using a forefoot ground contact and again while using a heel first ground contact. Changing initial ground contact from rear foot strike to forefoot strike resulted in decreases in vertical ground reaction forces at impact, peak knee moments in stance, peak knee powers, and improved symmetry in step length. This case suggests forefoot initial contact of the intact limb may minimize loading of the knee on the intact limb in individuals with transtibial amputation.

  19. Migration of Dust Particles from Comet 2P Encke

    NASA Technical Reports Server (NTRS)

    Ipatov, S. I.

    2003-01-01

    We investigated the migration of dust particles under the gravitational influence of all planets (except Pluto), radiation pressure, Poynting-Robertson drag, and solar wind drag for beta equal to 0.002, 0.004, 0.01, 0.05, 0.1, 0.2, and 0.4. For silicate particles these values of beta correspond to diameters of about 200, 100, 40, 9, 4, 2, and 1 microns, respectively. We used the Bulirsch-Stoer method of integration, and the relative error per integration step was taken to be less than 10⁻⁹. Initial orbits of the particles were close to the orbit of Comet 2P Encke. We considered particles starting near perihelion (runs denoted as Δt₀ = 0), near aphelion (Δt₀ = 0.5), and also particles released when the comet had moved for Pₐ/4 after perihelion passage (runs denoted as Δt₀ = 0.25), where Pₐ is the period of the comet. The time T of perihelion passage was varied with a step of 0.1 day for series "S" and 1 day for series "L". For each beta we considered N = 101 particles for "S" runs and 150 particles for "L" runs.
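    The quoted beta-to-diameter correspondence can be roughly checked with the standard radiation-pressure relation for spherical grains, beta ≈ 0.573 Q_pr / (ρ s), with ρ in g/cm³ and grain radius s in micrometers. The density and efficiency values below are our assumptions (silicate ρ = 2.5 g/cm³, Q_pr ≈ 1), so agreement with the abstract's numbers is approximate.

```python
RHO = 2.5    # assumed silicate grain density, g/cm^3
Q_PR = 1.0   # assumed radiation-pressure efficiency

def diameter_um(beta):
    """Grain diameter (micrometers) implied by a given beta."""
    radius = 0.573 * Q_PR / (RHO * beta)
    return 2 * radius

for beta, quoted in [(0.002, 200), (0.004, 100), (0.01, 40),
                     (0.05, 9), (0.1, 4), (0.2, 2), (0.4, 1)]:
    print(f"beta={beta}: ~{diameter_um(beta):.1f} um (abstract: ~{quoted} um)")
```

    The inverse proportionality between beta and grain size is the essential point: halving the particle diameter doubles the relative strength of radiation pressure.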

  20. A model for the prediction of latent errors using data obtained during the development process

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Martello, S. J.

    1984-01-01

    A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered in one or more of the error discovery processes during development, such as a design inspection, as the input data for a process that provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content, i.e., the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
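    The general idea can be sketched as follows. This is a hypothetical illustration of the estimation principle, not Gaffney and Martello's actual model: if each discovery activity catches a known fraction of the errors still present, the per-phase counts imply the total injected error content, and the latent content is whatever remains at delivery. The counts and efficiencies are invented.

```python
# Errors found at three discovery phases (e.g., inspection, unit test,
# system test) and the assumed fraction of remaining errors each catches.
found = [40, 25, 10]
eff = [0.5, 0.5, 0.4]

remaining_fraction = 1.0       # fraction of injected errors still present
total_caught_fraction = 0.0    # fraction caught across all phases
for e in eff:
    total_caught_fraction += remaining_fraction * e
    remaining_fraction *= 1 - e

# If the phases collectively caught total_caught_fraction of all errors,
# the observed total implies the injected total; latent = the difference.
total_injected = sum(found) / total_caught_fraction
latent = total_injected - sum(found)
print(round(total_injected, 1), round(latent, 1))
```

    With these assumed efficiencies the three phases catch 85% of injected errors, so 75 discovered errors imply roughly 88 injected and about 13 latent at delivery.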

  1. Exploring the speed and performance of molecular replacement with AMPLE using QUARK ab initio protein models

    PubMed Central

    Keegan, Ronan M.; Bibby, Jaclyn; Thomas, Jens; Xu, Dong; Zhang, Yang; Mayans, Olga; Winn, Martyn D.; Rigden, Daniel J.

    2015-01-01

    AMPLE clusters and truncates ab initio protein structure predictions, producing search models for molecular replacement. Here, an interesting degree of complementarity is shown between targets solved using the different ab initio modelling programs QUARK and ROSETTA. Search models derived from either program collectively solve almost all of the all-helical targets in the test set. Initial solutions produced by Phaser after only 5 min perform surprisingly well, improving the prospects for in situ structure solution by AMPLE during synchrotron visits. Taken together, the results show the potential for AMPLE to run more quickly and successfully solve more targets than previously suspected. PMID:25664744

  2. Ares I-X Upper Stage Simulator Structural Analyses Supporting the NESC Critical Initial Flaw Size Assessment

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2008-01-01

    The structural analyses described in the present report were performed in support of the NASA Engineering and Safety Center (NESC) Critical Initial Flaw Size (CIFS) assessment for the Ares I-X Upper Stage Simulator (USS) common shell segment. The structural analysis effort for the NESC assessment had three thrusts: shell buckling analyses; detailed stress analyses of the single-bolt joint test; and stress analyses of two-segment 10-degree-wedge models for the peak axial tensile running load. Elasto-plastic, large-deformation simulations were performed. Stress analysis results indicated that the stress levels were well below the material yield stress for the bounding axial tensile design load. This report also summarizes the analyses and results from parametric studies on modeling the shell-to-gusset weld, flange-surface mismatch, bolt preload, and washer-bearing-surface modeling. These analysis models were used to generate the stress levels specified for the fatigue crack growth assessment using the design load with a factor of safety.

  3. Steam-load-forecasting technique for central-heating plants. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, M.C.; Carnahan, J.V.

    Because boilers generally are most efficient at full load, the Army could achieve significant savings by running fewer boilers at high loads rather than more boilers at low loads. A reliable load prediction technique could help ensure that only those boilers required to meet demand are on line. This report presents the results of an investigation into the feasibility of forecasting heat plant steam loads from historical patterns and weather information. Using steam flow data collected at Fort Benjamin Harrison, IN, a Box-Jenkins transfer function model with an acceptably small prediction error was initially identified. Initial investigation of forecast model development appeared successful. Dynamic regression methods using actual ambient temperatures yielded the best results. Box-Jenkins univariate models' results appeared slightly less accurate. Since temperature information was not needed for model building and forecasting, however, it is recommended that Box-Jenkins models be considered prime candidates for load forecasting due to their simpler mathematics.
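    The simplest member of the Box-Jenkins univariate family is an AR(1) model, which can stand in here for the far richer identification and estimation procedure the report evaluates. The least-squares fit below is a generic sketch, and the hourly steam-load numbers are invented.

```python
# Hypothetical hourly steam loads (arbitrary units)
loads = [102, 98, 105, 110, 108, 112, 115, 111, 117, 120]

mean = sum(loads) / len(loads)
x = [v - mean for v in loads]  # work with deviations from the mean

# Least-squares AR(1) coefficient: phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi = num / den

# One-step-ahead forecast: revert toward the mean at rate phi
forecast = mean + phi * (loads[-1] - mean)
print(f"phi={phi:.3f}, next-hour forecast={forecast:.1f}")
```

    A full Box-Jenkins workflow would add model identification (ACF/PACF inspection), differencing, moving-average terms, and diagnostic checking; the transfer-function variant in the report additionally regresses the load on ambient temperature.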

  4. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. 
Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44,000, US$ 136,000, and US$ 268,000, respectively. The maximum risk to the population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.
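    The Gumbel step in the hazard assessment can be sketched generically. This is a standard method-of-moments Gumbel fit, not the authors' actual calibration, and the yearly landslide counts below are invented.

```python
import math

# Hypothetical annual landslide counts
annual_counts = [12, 8, 15, 30, 22, 9, 41, 18, 25, 11]

n = len(annual_counts)
mean = sum(annual_counts) / n
std = (sum((c - mean) ** 2 for c in annual_counts) / (n - 1)) ** 0.5

# Method-of-moments Gumbel parameters
beta = std * math.sqrt(6) / math.pi   # scale
mu = mean - 0.5772 * beta             # location (0.5772 = Euler-Mascheroni)

def count_for_return_period(T):
    """Expected annual count not exceeded except once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

for T in (5, 25, 50):
    print(f"{T}-yr return period: ~{count_for_return_period(T):.0f} landslides")
```

    Longer return periods map to larger expected counts, which is how the study extrapolates the number of future landslides on natural slopes from the short historical record.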

  5. The impact of dynamic data assimilation on the numerical simulations of the QE II cyclone and an analysis of the jet streak influencing the precyclogenetic environment

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Uccellini, Louis W.; Brill, Keith F.; Kuo, Ying-Hwa

    1992-01-01

    A mesoscale numerical model is combined with dynamic data assimilation via Newtonian relaxation, or 'nudging', to provide initial conditions for subsequent simulations of the QE II cyclone. Both the nudging technique and the inclusion of supplementary data are shown to have a large positive impact on the simulation of the QE II cyclone during the initial phase of rapid cyclone development. Within the initial development period (from 1200 to 1800 UTC 9 September 1978), the dynamic assimilation of operational and bogus data yields a coherent two-layer divergence pattern that is not well defined in the model run using only the operational data and static initialization. Diagnostic analyses based on the simulations show that the initial development of the QE II storm between 0000 UTC 9 September and 0000 UTC 10 September was embedded within an indirect circulation of an intense 300-hPa jet streak, was related to baroclinic processes extending throughout a deep portion of the troposphere, and was associated with a classic two-layer mass-divergence profile expected for an extratropical cyclone.

  6. Intercomparison of Streamflow Simulations between WRF-Hydro and Hydrology Laboratory-Research Distributed Hydrologic Model Frameworks

    NASA Astrophysics Data System (ADS)

    KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.

    2017-12-01

    The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally-consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids at a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS based on 1 km grids. We chose 279 USGS stations, in the domains of six different RFCs, that are relatively unaffected by dams or reservoirs. We use daily average values of simulations and observations for ease of comparison. The main purpose of this research is to evaluate how well HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations against both simulations and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show mixed model performance across geographic regions.
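    Two error functions commonly used to score daily streamflow simulations against gauge observations are root-mean-square error and Nash-Sutcliffe efficiency. The abstract does not specify which error functions were used, so the sketch below is illustrative, with invented discharge values.

```python
# Hypothetical daily discharge (m^3/s): observations vs. one model
obs = [10.0, 12.0, 30.0, 22.0, 15.0, 11.0]
sim = [9.0, 14.0, 25.0, 24.0, 16.0, 10.0]

n = len(obs)
sse = sum((s - o) ** 2 for s, o in zip(sim, obs))  # sum of squared errors
rmse = (sse / n) ** 0.5

mean_obs = sum(obs) / n
# Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the obs mean
nse = 1 - sse / sum((o - mean_obs) ** 2 for o in obs)
print(f"RMSE={rmse:.2f}, NSE={nse:.2f}")
```

    Computing such scores per gauge, then mapping them by RFC domain, is one way to surface the regional mix of model performance the abstract reports.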

  7. Back to first principles: a new model for the regulation of drug promotion

    PubMed Central

    Bennett, Alan; Jiménez, Freddy; Fields, Larry Eugene; Oyster, Joshua

    2015-01-01

    The US Food and Drug Administration's (‘FDA’ or the ‘Agency’) current regulatory framework for drug promotion, by significantly restricting the ability of drug manufacturers to communicate important, accurate, up-to-date scientific information about their products that is truthful and non-misleading, runs afoul of the First Amendment and actually runs counter to the Agency's public health mission. Our article proposes a New Model that represents an initial proposal for a modern, sustainable regulatory framework that comprehensively addresses drug promotion while protecting the public health, protecting manufacturers’ First Amendment rights, establishing clear and understandable rules, and maintaining the integrity of the FDA approval process. The New Model would create three categories of manufacturer communications—(1) Scientific Exchange and Other Exempt Communications, (2) Non-Core Communications, and (3) Core Communications—that would be regulated consistent with the First Amendment and according to the strength of the government's interest in regulating the specific communications included within each category. The New Model should address the FDA's concerns related to off-label speech while protecting drug manufacturers’ freedom to engage in truthful and non-misleading communications about their products. PMID:27774195

  8. Expansion of the Real-time Sport-land Information System for NOAA/National Weather Service Situational Awareness and Local Modeling Applications

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.

    2014-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has been running a real-time version of the Land Information System (LIS) since summer 2010 (hereafter SPoRT-LIS). The real-time SPoRT-LIS runs the Noah land surface model (LSM) in an offline capacity apart from a numerical weather prediction model, using input atmospheric and precipitation analyses (i.e., "forcings") to drive the Noah LSM integration at 3-km resolution. Its objectives are to (1) produce local-scale information about the soil state for NOAA/National Weather Service (NWS) situational awareness applications such as drought monitoring and assessing flood potential, and (2) provide land surface initialization fields for local modeling initiatives. The current domain extent has been limited by the input atmospheric analyses that drive the Noah LSM integration within SPoRT-LIS, specifically the National Centers for Environmental Prediction (NCEP) Stage IV precipitation analyses. Because of the geographical edges of the Stage IV precipitation grid and its limitations in the western U.S., the SPoRT-LIS was originally confined to a domain fully nested within the Stage IV grid, over the southeastern half of the Conterminous United States (CONUS). In order to expand the real-time SPoRT-LIS to a full CONUS domain, alternative precipitation forcing datasets were explored in year-long, offline comparison runs of the Noah LSM. Based on the results of these comparison simulations, we chose to implement the radar/gauge-based precipitation analyses from the National Severe Storms Laboratory as a replacement for the Stage IV product. The Multi-Radar Multi-Sensor (MRMS; formerly known as the National Mosaic and Multi-Sensor QPE) product has full CONUS coverage at higher resolution, thereby providing better coverage and greater detail than the Stage IV product.
This paper will describe the expanded/upgraded SPoRT-LIS, present comparisons between the original and upgraded SPoRT-LIS, and discuss the path forward for future collaboration opportunities with SPoRT partners in the NWS.

  9. Numerical and Experimental Study on the Effect of Coral Reef and Beach Vegetation on Reduction of Long Wave Run-Up

    NASA Astrophysics Data System (ADS)

    Mohandie, R. K.; Teng, M. H.

    2009-12-01

    Numerical and experimental studies were carried out to examine the capabilities of coral reefs and beach vegetation to mitigate tsunami and storm surge inundation. For long waves propagating over variable depth, such as over a reef, the nonlinear and dispersive Boussinesq equations were applied. For run-up onto dry land, where the nonlinear effect dominates, the nonlinear, nondispersive shallow water equations were used. Long waves with various amplitudes and wavelengths propagating over coral reefs of different lengths and heights were investigated to quantify the conditions under which a coral reef may be effective in reducing wave impact. It was observed that a reef can cause a long wave to separate into several smaller waves, and it can also cause wave breaking, resulting in energy dissipation. Our data suggest that both wave separation and breaking induced by coral reefs are effective at mitigating long wave run-up, with breaking being noticeably more effective than separation. As expected, the higher the coral reef, the greater the reduction in wave run-up, especially when the reef height exceeds 50% of the water depth. For reefs to be effective as a barrier against long waves such as tsunamis and storm surges, the reefs must be sufficiently long in the wave propagation direction; for example, their length should be at least of the same order as the wavelength. In this study, an effective reef was shown to reduce long wave run-up by as much as 25% through wave separation and 50% through wave breaking.
    Three types of vegetation, namely grass, shrubs, and coconut trees, were modeled and tested in a wave tank against various initial wave amplitudes and beach slopes in the Hydraulics Lab at the University of Hawaii (UH), to examine each type's effectiveness in reducing wave run-up and to determine its roughness coefficient through numerical simulation and experimental measurement. These roughness coefficients were shown to be higher than traditional Manning's coefficient values for vegetation in channel flows. The coefficients were also shown to be a function of the ratio of the initial wave amplitude to the vegetation height, and to be relatively independent of the beach slope. The vegetation spacing and tree diameters in the lab models were selected, at reduced scale, from typical spacings and tree diameters observed in the field. All three types of vegetation were found to be effective in reducing wave run-up, especially on mildly sloped beaches, with reduction rates ranging from 20% to more than 50%. A numerical simulation incorporating the effects of a coral reef and the combined vegetation types showed that on a 5-degree slope the reduction in run-up was 61% compared with an unprotected scenario. A larger-scale experimental study on coconut trees and bushes in the NSF-funded tsunami basin at Oregon State University (OSU) also showed that these vegetation types are effective at reducing wave run-up. These results can help achieve a better understanding of the role that coral reefs and vegetation play in tsunami and storm surge mitigation.
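
    The governing equations are only named above; for reference, a standard form of the 1D nonlinear, nondispersive shallow water equations with a Manning-type roughness term (the kind of friction formulation to which measured roughness coefficients would apply) is:

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,
\qquad
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  + g\,\frac{\partial \eta}{\partial x}
  = -\,\frac{g\,n^{2}\,u\,|u|}{h^{4/3}},
```

    where $h$ is the total water depth, $u$ the depth-averaged velocity, $\eta$ the free-surface elevation, $g$ gravity, and $n$ the roughness coefficient. A larger $n$ (denser or taller vegetation relative to the flow depth) increases the friction term and thus reduces run-up.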

  10. Will Arctic sea ice thickness initialization improve seasonal forecast skill?

    NASA Astrophysics Data System (ADS)

    Day, J. J.; Hawkins, E.; Tietsche, S.

    2014-11-01

    Arctic sea ice thickness is thought to be an important predictor of Arctic sea ice extent. However, coupled seasonal forecast systems do not generally use sea ice thickness observations in their initialization and are therefore missing a potentially important source of additional skill. To investigate how large this source is, a set of ensemble potential predictability experiments with a global climate model, initialized with and without knowledge of the sea ice thickness initial state, have been run. These experiments show that accurate knowledge of the sea ice thickness field is crucially important for sea ice concentration and extent forecasts up to 8 months ahead, especially in summer. Perturbing sea ice thickness also has a significant impact on the forecast error in Arctic 2 m temperature a few months ahead. These results suggest that advancing capabilities to observe and assimilate sea ice thickness into coupled forecast systems could significantly increase skill.

  11. SMOS Soil Moisture Data Assimilation in the NASA Land Information System: Impact on LSM Initialization and NWP Forecasts

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay; Case, Jonathan L.; Zavodsky, Bradley

    2015-01-01

    Land surface models are important components of numerical weather prediction (NWP) models, partitioning incoming energy into latent and sensible heat fluxes that affect boundary layer growth and destabilization. During warm-season months, diurnal heating and convective initiation depend strongly on evapotranspiration and available boundary layer moisture, which are substantially affected by soil moisture content. Therefore, to properly simulate warm-season processes in NWP models, an accurate initialization of the land surface state is important for accurately depicting the exchange of heat and moisture between the surface and boundary layer. In this study, soil moisture retrievals from the Soil Moisture and Ocean Salinity (SMOS) satellite radiometer are assimilated into the Noah Land Surface Model via an Ensemble Kalman Filter embedded within the NASA Land Information System (LIS) software framework. The output from LIS-Noah is subsequently used to initialize runs of the Weather Research and Forecasting (WRF) NWP model. The impact of assimilating SMOS retrievals is assessed by initializing the WRF model with LIS-Noah output obtained with and without SMOS data assimilation. The southeastern United States is used as the domain for a preliminary case study. During the summer months, there is extensive irrigation in the lower Mississippi Valley for rice and other crops. The irrigation is not represented in the meteorological forcing used to drive the LIS-Noah integration, but the irrigated areas show up clearly in the SMOS soil moisture retrievals, resulting in a case with a large difference in initial soil moisture conditions. The impact of SMOS data assimilation on both Noah soil moisture fields and on short-term (0-48 hour) WRF weather forecasts will be presented.
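
    The abstract names the Ensemble Kalman Filter without detail; the following is a minimal sketch of a stochastic EnKF analysis step for a single scalar observation (the layer structure, numbers, and variable names are illustrative, not the LIS implementation):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state) prior soil-moisture states
    obs: scalar observation (e.g., a SMOS retrieval)
    obs_var: observation-error variance
    H: (n_state,) linear observation operator
    """
    n, _ = ensemble.shape
    hx = ensemble @ H                      # predicted observations, (n,)
    x_mean = ensemble.mean(axis=0)
    # Sample covariances between state and predicted observation
    P_xy = (ensemble - x_mean).T @ (hx - hx.mean()) / (n - 1)  # (n_state,)
    P_yy = np.var(hx, ddof=1) + obs_var
    K = P_xy / P_yy                        # Kalman gain, (n_state,)
    # Perturb the observation independently for each member
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return ensemble + np.outer(perturbed - hx, K)

rng = np.random.default_rng(0)
prior = rng.normal(0.25, 0.05, size=(40, 3))   # 40 members, 3 soil layers
H = np.array([1.0, 0.0, 0.0])                  # observe the top layer only
posterior = enkf_update(prior, 0.35, 0.01**2, H, rng)
```

    The gain spreads the top-layer innovation to deeper layers in proportion to their sampled covariance with the observed layer, which is how an irrigation signal in a surface retrieval can adjust the root-zone state.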

  12. Dynamical Downscaling of Seasonal Climate Prediction over Nordeste Brazil with ECHAM3 and NCEP's Regional Spectral Models at IRI.

    NASA Astrophysics Data System (ADS)

    Nobre, Paulo; Moura, Antonio D.; Sun, Liqiang

    2001-12-01

    This study presents an evaluation of a seasonal climate forecast made with the International Research Institute for Climate Prediction (IRI) dynamical forecast system (a regional model nested in a general circulation model) over northern South America for January-April 1999, encompassing the rainy season over Brazil's Nordeste. The one-way nesting is done in two tiers: first, NCEP's Regional Spectral Model (RSM) is run with an 80-km grid mesh forced by outputs of the ECHAM3 atmospheric general circulation model (AGCM); then the RSM is run with a finer grid mesh (20 km) forced by the forecasts generated by the RSM-80. An ensemble of three realizations was produced. Lower boundary conditions over the oceans for both the ECHAM and RSM runs are sea surface temperature forecasts over the tropical oceans. Soil moisture is initialized from ECHAM's inputs. The rainfall forecasts generated by the regional model are compared with those of the AGCM and with observations. It is shown that the regional model at 80-km resolution improves upon the AGCM rainfall forecast, reducing both seasonal bias and root-mean-square error. On the other hand, the RSM-20 forecasts presented larger errors, with spatial patterns that resemble those of the local topography. The better forecast of the position and width of the intertropical convergence zone (ITCZ) over the tropical Atlantic by the RSM-80 is one of the principal reasons for its better forecast scores relative to the AGCM. The regional model improved the spatial as well as the temporal details of the rainfall distribution, and also presented the minimum spread among the ensemble members. The statistics of synoptic-scale weather variability on seasonal timescales were best forecast with the regional 80-km model over the Nordeste. The possibility of forecasting the frequency distribution of dry and wet spells within the rainy season is encouraging.

  13. Sensitivity Study of IROE Cloud Retrievals Using VIIRS M-Bands and Combined VIIRS/CrIS IR Observations

    NASA Astrophysics Data System (ADS)

    Wang, C.; Platnick, S. E.; Meyer, K.; Ackerman, S. A.; Holz, R.; Heidinger, A.

    2017-12-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi-NPP spacecraft is considered the next-generation instrument providing operational moderate-resolution imaging capabilities after the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua. However, cloud-top property (CTP) retrieval algorithms designed for the two instruments cannot be identical because of the absence of CO2 bands on VIIRS. In this study, we conduct a comprehensive sensitivity study of cloud retrievals utilizing an IR optimal estimation (IROE) based algorithm. With a fast IR radiative transfer model, the IROE simultaneously retrieves cloud-top height (CTH), cloud optical thickness (COT), cloud effective radius (CER), and the corresponding uncertainties using a set of IR bands. Three retrieval runs are implemented for this sensitivity study: retrievals using 1) three native VIIRS M-bands at 750-m resolution (8.5, 11, and 12 μm), 2) the three native VIIRS M-bands plus spectrally integrated CO2 bands from the Cross-Track Infrared Sounder (CrIS), and 3) six MODIS IR bands (8.5, 11, 12, 13.3, 13.6, and 13.9 μm). We select a few collocated MODIS and VIIRS granules for pixel-level comparison. Furthermore, aggregated daily and monthly cloud properties from the three runs are also compared. The results show that the combined VIIRS/CrIS run agrees well with the MODIS-only run except for pixels near cloud edges. The VIIRS-only run is close to its counterparts when clouds are optically thick. However, for optically thin clouds, the VIIRS-only run can be readily influenced by the initial guess, and large discrepancies and uncertainties are found for such clouds in the VIIRS-only run.
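
    The IROE algorithm itself is not spelled out in the abstract; as a sketch of the general optimal-estimation machinery such retrievals are typically built on (not the actual IROE code), a Gauss-Newton update combining prior and measurement covariances can be written as follows, here exercised on a toy linear "forward model" standing in for the IR radiative transfer model:

```python
import numpy as np

def oe_step(x, y_obs, forward, jacobian, x_a, S_a_inv, S_e_inv):
    """One Gauss-Newton step of optimal estimation (Rodgers form).
    x: current state (e.g., [CTH, COT, CER]); x_a: a priori state
    S_a_inv, S_e_inv: inverse prior and measurement-error covariances.
    """
    y = forward(x)
    K = jacobian(x)                    # (n_obs, n_state)
    A = S_a_inv + K.T @ S_e_inv @ K    # Gauss-Newton Hessian approximation
    b = K.T @ S_e_inv @ (y_obs - y) - S_a_inv @ (x - x_a)
    return x + np.linalg.solve(A, b)

# Toy 2-parameter state observed through 3 linear "channels"
Kmat = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.8]])
forward = lambda x: Kmat @ x
jacobian = lambda x: Kmat
x_a = np.zeros(2)                      # weak prior at zero
x_true = np.array([1.0, -0.5])
y_obs = forward(x_true)                # noise-free synthetic observations
x = x_a.copy()
for _ in range(5):
    x = oe_step(x, y_obs, forward, jacobian, x_a,
                S_a_inv=np.eye(2) * 1e-4, S_e_inv=np.eye(3) * 1e4)
```

    With a weak prior the iteration converges to the least-squares solution; the inverse of the final `A` matrix is the usual estimate of retrieval uncertainty, which is how per-pixel uncertainties like those discussed above are obtained.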

  14. Primordial blackholes and gravitational waves for an inflection-point model of inflation

    NASA Astrophysics Data System (ADS)

    Choudhury, Sayantan; Mazumdar, Anupam

    2014-06-01

    In this article we provide a new closed relationship between the cosmic abundance of primordial gravitational waves and primordial black holes originating from initial inflationary perturbations, for inflection-point models in which inflation occurs below the Planck scale. From the current Planck constraints on the tensor-to-scalar ratio and the running of the spectral tilt, and from the abundance of dark matter in the universe, we deduce a strict bound on the current abundance of primordial black holes: 9.99712 ×10-3 < ΩPBHh2 < 9.99736 ×10-3.

  15. Design and Development of a Model to Simulate 0-G Treadmill Running Using the European Space Agency's Subject Loading System

    NASA Technical Reports Server (NTRS)

    Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.

    2010-01-01

    PURPOSE Develop a model that simulates a human running in 0 g using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down force (PDF). DESIGN The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, a thigh segment, a shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity, with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS for further characterization of the model's GRF. Peak GRF data were fit to a linear regression dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification was conducted by running the model at 4 different speeds, each at 3 different PDFs. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
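
    The spring-mass basis of the model is described only qualitatively; a minimal sketch of the idea, integrating the vertical dynamics of a point mass on a linear leg spring during stance to produce a peak vertical GRF (all parameter values hypothetical, not the ADAMS model), might look like:

```python
def stance_grf(m=70.0, k=20000.0, v0=-1.0, l0=1.0, g=9.81, dt=1e-4):
    """Integrate vertical spring-mass stance dynamics and return the peak
    vertical GRF (N). y is hip height; the leg spring engages when y < l0.
    m: mass (kg), k: leg stiffness (N/m), v0: touchdown velocity (m/s)."""
    y, vy = l0, v0            # touchdown at natural leg length, moving down
    peak = 0.0
    while True:
        compression = max(l0 - y, 0.0)
        grf = k * compression             # spring force = vertical GRF
        peak = max(peak, grf)
        ay = grf / m - g                  # Newton's second law, vertical
        vy += ay * dt
        y += vy * dt
        if y >= l0 and vy > 0:            # leg back to natural length: takeoff
            return peak
```

    Faster or harder touchdowns compress the spring further and raise the peak GRF, which is the kind of relationship the Running Model's PDF/speed regression captures empirically.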

  16. Reinforcement of wheel running in BALB/c mice: role of motor activity and endogenous opioids.

    PubMed

    Vargas-Pérez, Héctor; Sellings, Laurie H L; Paredes, Raúl G; Prado-Alcalá, Roberto A; Díaz, José-Luis

    2008-11-01

    The authors investigated the effect of the opioid antagonist naloxone on wheel-running behavior in BALB/c mice. Naloxone delayed the acquisition of wheel-running behavior but did not reduce the expression of this behavior once acquired. Delayed acquisition was not likely a result of reduced locomotor activity, as naloxone-treated mice did not exhibit reduced wheel running after the behavior was acquired, and they performed normally on the rotarod test. However, naloxone blocked conditioned place preference for a novel compartment paired previously with wheel running, suggesting that naloxone may delay wheel-running acquisition by blocking the rewarding or reinforcing effects of the behavior. These results suggest that the endogenous opioid system mediates the initial reinforcing effects of wheel running that are important in acquisition of the behavior.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the two methods across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized dry or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
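
    The WTG method is only named above; in its commonly used form, the parameterized large-scale vertical velocity is diagnosed by relaxing the simulated potential temperature profile toward the reference state,

```latex
\bar{w}_{\mathrm{WTG}}\,\frac{\partial \bar{\theta}}{\partial z}
  = \frac{\bar{\theta} - \theta_{\mathrm{ref}}}{\tau},
```

    where $\tau$ is a relaxation timescale; the diagnosed $\bar{w}_{\mathrm{WTG}}$ then advects temperature and moisture in the simulated column. The DGW method instead obtains the large-scale vertical velocity from a damped gravity wave equation driven by the column's virtual temperature anomaly relative to the reference state.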

  18. Impact of AIRS Thermodynamic Profile on Regional Weather Forecast

    NASA Technical Reports Server (NTRS)

    Chou, Shih-Hung; Zavodsky, Brad; Jedlovee, Gary

    2010-01-01

    Prudent assimilation of AIRS thermodynamic profiles and their quality indicators can improve initial conditions for regional weather models. The AIRS-enhanced analysis has a warmer and moister PBL. Forecasts with AIRS profiles are generally closer to NAM analyses than the CNTL run. Assimilation of AIRS leads to an overall QPF improvement in 6-h accumulated precipitation forecasts. Including AIRS profiles in the assimilation process enhances the moist instability and produces stronger updrafts and a better precipitation forecast than the CNTL run.

  19. Using Agent-Based Distillations to Explore Logistics Support to Urban, Humanitarian Assistance/Disaster Relief Operations

    DTIC Science & Technology

    2003-09-01

    environments is warranted. The author’s initial concept was to set up the same scenario in three different PA agent-based programs, MANA, PYTHAGORAS ...of the consolidation as the result for that particular set of runs. This technique also allowed us to invoke the Central Limit Theorem . D...capabilities in the SOCRATES modeling environment. We encourage MANA and PYTHAGORAS to add this functionality to their products as well. We

  20. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
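
    The variance decomposition described above can be sketched directly: with a large ensemble, the variance of the ensemble mean over time estimates the forced component, while the average across-member spread estimates the internal component. A toy illustration with synthetic data (not CESM output):

```python
import numpy as np

def decompose(ensemble):
    """Split ensemble variance into a 'forced' part (variance over time of
    the ensemble mean) and an 'internal' part (time-mean of the
    across-member variance). ensemble: (n_members, n_times)."""
    forced = ensemble.mean(axis=0).var()
    internal = ensemble.var(axis=0, ddof=1).mean()
    return forced, internal

rng = np.random.default_rng(1)
n_members, n_times = 40, 100
forcing = np.linspace(0.0, 1.0, n_times)            # common forced signal
noise = rng.normal(0.0, 0.5, (n_members, n_times))  # internal variability
ens = forcing + noise
forced, internal = decompose(ens)
```

    With finite ensembles the "forced" estimate is inflated by residual internal noise of order var/n_members, which is one reason large ensemble sizes matter for distinguishing initialized from uninitialized skill.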

  1. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  2. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  3. Body-terrain interaction affects large bump traversal of insects and legged robots.

    PubMed

    Gart, Sean W; Li, Chen

    2018-02-02

    Small animals and robots must often rapidly traverse large bump-like obstacles when moving through complex 3D terrains, during which, in addition to leg-ground contact, their body inevitably comes into physical contact with the obstacles. However, we know little about the performance limits of large bump traversal and how body-terrain interaction affects traversal. To address these questions, we challenged the discoid cockroach and an open-loop six-legged robot to dynamically run into a large bump of varying height to discover the maximal traversal performance, and studied how locomotor modes and traversal performance are affected by body-terrain interaction. Remarkably, during rapid running, both the animal and the robot were capable of dynamically traversing a bump much higher than hip height (up to 4 times hip height for the animal and 3 times for the robot) at traversal speeds typical of running, with traversal probability decreasing as bump height increased. A stability analysis using a novel locomotion energy landscape model explained why traversal was more likely when the animal or robot approached the bump with a low initial body yaw and a high initial body pitch, and why deflection was more likely otherwise. Inspired by these principles, we demonstrated a novel control strategy of active body pitching that increased the robot's maximal traversable bump height by 75%. Our study is a major step in establishing the framework of locomotion energy landscapes to understand locomotion in complex 3D terrains.

  4. Simulating Future Changes in Spatio-temporal Precipitation by Identifying and Characterizing Individual Rainstorm Events

    NASA Astrophysics Data System (ADS)

    Chang, W.; Stein, M.; Wang, J.; Kotamarthi, V. R.; Moyer, E. J.

    2015-12-01

    A growing body of literature suggests that human-induced climate change may cause significant changes in precipitation patterns, which could in turn influence future flood levels and frequencies as well as water supply and management practices. Although climate models produce full three-dimensional simulations of precipitation, analyses of model precipitation have focused either on time-averaged distributions or on individual time series with no spatial information. We describe here a new approach based on identifying and characterizing individual rainstorms in either data or model output. Our approach enables us to readily characterize important spatio-temporal aspects of rainstorms, including initiation location, intensity (mean and pattern), spatial extent, duration, and trajectory. We apply this technique to high-resolution precipitation over the continental U.S., both from radar-based observations (NCEP Stage IV QPE product, 1-hourly, 4-km spatial resolution) and from model runs with dynamical downscaling (WRF regional climate model, 3-hourly, 12-km spatial resolution). In the model studies we investigate changes in storm characteristics under a business-as-usual warming scenario to 2100 (RCP 8.5). We find that in these model runs, rainstorm intensity increases as expected with rising temperatures (approximately 7%/K, following increased atmospheric moisture content), while total precipitation increases by a lesser amount (3%/K), consistent with other studies. We identify for the first time the necessary compensating mechanism: in these model runs, individual precipitation events become smaller. Other aspects are approximately unchanged in the warmer climate. Because these spatio-temporal changes in rainfall patterns would impact regional hydrology, it is important that they be accurately incorporated into any impacts assessment.
For this purpose we have developed a methodology for producing scenarios of future precipitation that combine observational data and model-projected changes. We statistically describe the future changes in rainstorm characteristics suggested by the WRF model and apply those changes to observational data. The resulting high spatial and temporal resolution scenarios have immediate applications for impacts assessment and adaptation studies.
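
    The identification step is not detailed in the abstract; one common approach, which may differ from the authors' method, is connected-component grouping of grid cells above a rain-rate threshold. A minimal pure-Python sketch on a 2D field:

```python
import numpy as np
from collections import deque

def identify_storms(rain, threshold=1.0):
    """Group contiguous grid cells above `threshold` (4-connectivity)
    into storms; return each storm's area (cells) and mean rain rate."""
    wet = rain > threshold
    seen = np.zeros_like(wet, dtype=bool)
    storms = []
    for i in range(rain.shape[0]):
        for j in range(rain.shape[1]):
            if wet[i, j] and not seen[i, j]:
                # Breadth-first flood fill of one contiguous storm
                cells, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:
                    a, b = queue.popleft()
                    cells.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rain.shape[0]
                                and 0 <= nb < rain.shape[1]
                                and wet[na, nb] and not seen[na, nb]):
                            seen[na, nb] = True
                            queue.append((na, nb))
                storms.append({
                    "area": len(cells),
                    "mean_intensity": float(np.mean([rain[c] for c in cells])),
                })
    return storms

rain = np.zeros((6, 6))
rain[1:3, 1:3] = 5.0   # a contiguous 4-cell storm
rain[4, 4] = 2.0       # a single-cell storm
storms = identify_storms(rain)
```

    Tracking each labeled region across consecutive time steps (e.g., by overlap or centroid distance) then yields the per-storm duration and trajectory statistics described above.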

  5. Modeling of Longitudinal Changes in Left Ventricular Dimensions among Female Adolescent Runners

    PubMed Central

    2015-01-01

    Purpose Left ventricular (LV) enlargement has been linked to sudden cardiac death among young athletes. This study aimed to model the effect of long-term incessant endurance training on LV dimensions in female adolescent runners. Methods Japanese female adolescent competitive distance runners (n = 36, age: 15 years, height: 158.1 ± 4.6 cm, weight: 44.7 ± 6.1 kg, percent body fat: 17.0 ± 5.2%) underwent echocardiography and underwater weighing every 6 months for 3 years. Since the measurement occasions varied across subjects, multilevel analysis was used for curvilinear modeling of changes in running performance (velocities in 1500 m and 3000 m track race), maximal oxygen uptake (VO2max), body composition, and LV dimensions. Results Initially, LV end-diastolic dimension (LVEDd) and LV mass were 47.0 ± 3.0 mm and 122.6 ± 15.7 g, respectively. Running performance and VO2max improved along with the training duration. The trends of changes in fat-free mass (FFM) and LVEDd were similarly best described by quadratic polynomials. LVEDd did not change over time in the model including FFM as a covariate. Increases in LV wall thicknesses were minimal and independent of FFM. LV mass increased according to a quadratic polynomial trend even after adjusting for FFM. Conclusions FFM was an important factor determining changes in LVEDd and LV mass. Although running performance and VO2max were improved by continued endurance training, further LV cavity enlargement hardly occurred beyond FFM gain in these adolescent female runners, who already demonstrated a large LVEDd. PMID:26469336

  6. Exclusive Preference Develops Less Readily on Concurrent Ratio Schedules with Wheel-Running than with Sucrose Reinforcement

    PubMed Central

    Belke, Terry W

    2010-01-01

    Previous research suggested that allocation of responses on concurrent schedules of wheel-running reinforcement was less sensitive to schedule differences than typically observed with more conventional reinforcers. To assess this possibility, 16 female Long Evans rats were exposed to concurrent FR FR schedules of reinforcement and the schedule value on one alternative was systematically increased. In one condition, the reinforcer on both alternatives was .1 ml of 7.5% sucrose solution; in the other, it was a 30-s opportunity to run in a wheel. Results showed that the average ratio at which greater than 90% of responses were allocated to the unchanged alternative was higher with wheel-running reinforcement. As the ratio requirement was initially increased, responding strongly shifted toward the unchanged alternative with sucrose, but not with wheel running. Instead, responding initially increased on both alternatives, then subsequently shifted toward the unchanged alternative. Furthermore, changeover responses as a percentage of total responses decreased with sucrose, but not wheel-running reinforcement. Finally, for some animals, responding on the increasing ratio alternative decreased as the ratio requirement increased, but then stopped and did not decline with further increments. The implications of these results for theories of choice are discussed. PMID:21451744

  7. Exclusive preference develops less readily on concurrent ratio schedules with wheel-running than with sucrose reinforcement.

    PubMed

    Belke, Terry W

    2010-09-01

    Previous research suggested that allocation of responses on concurrent schedules of wheel-running reinforcement was less sensitive to schedule differences than typically observed with more conventional reinforcers. To assess this possibility, 16 female Long Evans rats were exposed to concurrent FR FR schedules of reinforcement and the schedule value on one alternative was systematically increased. In one condition, the reinforcer on both alternatives was .1 ml of 7.5% sucrose solution; in the other, it was a 30-s opportunity to run in a wheel. Results showed that the average ratio at which greater than 90% of responses were allocated to the unchanged alternative was higher with wheel-running reinforcement. As the ratio requirement was initially increased, responding strongly shifted toward the unchanged alternative with sucrose, but not with wheel running. Instead, responding initially increased on both alternatives, then subsequently shifted toward the unchanged alternative. Furthermore, changeover responses as a percentage of total responses decreased with sucrose, but not wheel-running reinforcement. Finally, for some animals, responding on the increasing ratio alternative decreased as the ratio requirement increased, but then stopped and did not decline with further increments. The implications of these results for theories of choice are discussed.

  8. Landing Characteristics in Waves of Three Dynamic Models of Flying Boats

    NASA Technical Reports Server (NTRS)

    Benson, James M.; Havens, Robert F.; Woodward, David R.

    1947-01-01

    Powered models of three different flying boats were landed in oncoming waves of various heights and lengths. The resulting motions and accelerations were recorded to survey the effects of varying the trim at landing, the deceleration after landing, and the size of the waves. One of the models had an unusually long afterbody. The data for landings with normal rates of deceleration indicated that the most severe motions and accelerations were likely to occur at some point in the landing run subsequent to the initial impact. Landings made at abnormally low trims led to unusually severe bounces during the runout. The least severe landings occurred after a smooth landing when the model was rapidly decelerated at about 0.4 g in a simulation of the proposed use of braking devices. The severity of the landings increased with wave height and was at a maximum when the wave length was on the order of one and one-half to twice the over-all length of the model. The models with afterbodies of moderate length frequently bounced clear of the water into a stalled attitude at speeds below flying speed. The model with the long afterbody had less tendency to bounce from the waves and consequently showed less severe accelerations during the landing run than the models with moderate lengths of afterbody.

  9. FireMap: A Web Tool for Dynamic Data-Driven Predictive Wildfire Modeling Powered by the WIFIRE Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Block, J.; Crawl, D.; Artes, T.; Cowart, C.; de Callafon, R.; DeFanti, T.; Graham, J.; Smarr, L.; Srivas, T.; Altintas, I.

    2016-12-01

    The NSF-funded WIFIRE project has designed a web-based wildfire modeling simulation and visualization tool called FireMap. The tool executes FARSITE to model fire propagation using dynamic weather and fire data, configuration settings provided by the user, and static topography and fuel datasets already built-in. Using GIS capabilities combined with scalable big data integration and processing, FireMap enables simple execution of the model with options for running ensembles by taking the information uncertainty into account. The results are easily viewable, sharable, repeatable, and can be animated as a time series. From these capabilities, users can model real-time fire behavior, analyze what-if scenarios, and keep a history of model runs over time for sharing with collaborators. FireMap runs FARSITE with national and local sensor networks for real-time weather data ingestion and High-Resolution Rapid Refresh (HRRR) weather for forecasted weather. The HRRR is a NOAA/NCEP operational weather prediction system comprising a numerical forecast model and an analysis/assimilation system to initialize the model. It is run with a horizontal resolution of 3 km, has 50 vertical levels, and has a temporal resolution of 15 minutes. The HRRR requires an Environmental Data Exchange (EDEX) server to receive the feed and generate secondary products from it for the modeling. UCSD's EDEX server, funded by NSF, makes high-resolution weather data available to researchers worldwide and enables visualization of weather systems and weather events lasting months or even years. The high-speed server aggregates weather data from the University Consortium for Atmospheric Research by way of a subscription service from the Consortium called the Internet Data Distribution system. These features are part of WIFIRE's long-term goals to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. 
Although FireMap is a research product of WIFIRE, developed in collaboration with a number of fire departments, the tool is operational in pilot form, providing big-data-driven predictive fire spread modeling. Most recently, FireMap was used for situational awareness in the July 2016 Sand Fire by the LA City and LA County Fire Departments.

  10. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    USGS Publications Warehouse

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) the user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
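    The queue-distribute-retrieve pattern described above can be sketched in a few lines of Python. This is a hypothetical minimal stand-in, not GENIE's actual wire protocol or API: a manager thread serves queued parameter sets over TCP (one JSON message per connection) and collects results, while workers fetch a run, execute any callable "model", and send the result back.

    ```python
    import json
    import queue
    import socket
    import threading

    def run_manager(srv, runs, results, n_runs):
        """Serve queued model runs to workers over TCP and collect results."""
        done = 0
        while done < n_runs:
            conn, _ = srv.accept()
            with conn:
                req = json.loads(conn.recv(4096).decode())
                if req["op"] == "get":          # a worker asks for a run
                    conn.sendall(json.dumps(runs.get()).encode())
                else:                           # a worker returns a result
                    results.put(req["result"])
                    done += 1
        srv.close()

    def worker(port, model):
        """Fetch one run from the manager, execute it, send the result back."""
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(json.dumps({"op": "get"}).encode())
            params = json.loads(c.recv(4096).decode())
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(json.dumps({"op": "put", "result": model(params)}).encode())

    # Queue two runs of a toy "model" and execute them with two workers.
    runs, results = queue.Queue(), queue.Queue()
    for p in ({"k": 2}, {"k": 3}):
        runs.put(p)
    srv = socket.create_server(("127.0.0.1", 0))  # OS-assigned free port
    port = srv.getsockname()[1]
    mgr = threading.Thread(target=run_manager, args=(srv, runs, results, 2))
    mgr.start()
    workers = [threading.Thread(target=worker, args=(port, lambda p: p["k"] ** 2))
               for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    mgr.join()
    out = sorted(results.get() for _ in range(2))
    print(out)
    ```

    Because communication is plain TCP, a worker could run on any machine that can reach the manager; a production system like GENIE additionally needs message framing, fault tolerance, and re-queuing of failed runs.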

  11. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by a few fundamental parameters such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity; in particular, higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
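    The cadence effect can be illustrated with a toy bouncing-ball calculation. This is a sketch under a simplifying assumption, not the authors' model: if each step were fully ballistic, step frequency f would give a flight time of 1/f, a vertical landing speed of g/(2f), and a re-launch power equal to the landing kinetic energy times f.

    ```python
    G = 9.81  # gravitational acceleration, m/s^2

    def landing_speed(cadence_hz):
        """Vertical landing speed of a pure bouncing-ball runner: with step
        frequency f, the ballistic flight lasts 1/f and half of it is spent
        falling, so v = g * (1/f) / 2."""
        return G / (2.0 * cadence_hz)

    def bounce_power(cadence_hz):
        """Mechanical power per unit mass needed to re-launch each bounce:
        landing kinetic energy per step (v^2 / 2) times step frequency."""
        v = landing_speed(cadence_hz)
        return 0.5 * v ** 2 * cadence_hz

    # Higher cadence lowers both landing speed (impact intensity) and
    # bouncing power (energy cost), consistent with the reported trend.
    for f in (2.5, 3.0, 3.5):  # steps per second
        print(f"{f:.1f} Hz: v = {landing_speed(f):.2f} m/s, "
              f"P = {bounce_power(f):.2f} W/kg")
    ```

    In this toy treatment the bouncing power scales as g²/(8f), so doubling cadence halves both the landing speed and the energy cost of bouncing; the real model and the experimental data in the paper are of course richer than this.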

  12. Level-2 Milestone 3244: Deploy Dawn ID Machine for Initial Science Runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, D

    2009-09-21

    This report documents the delivery, installation, integration, testing, and acceptance of the Dawn system, ASC L2 milestone 3244: Deploy Dawn ID Machine for Initial Science Runs, due September 30, 2009. The full text of the milestone is included in Attachment 1. The description of the milestone is: This milestone will be a result of work started three years ago with the planning for a multi-petaFLOPS UQ-focused platform (Sequoia) and will be satisfied when a smaller ID version of the final system is delivered, installed, integrated, tested, accepted, and deployed at LLNL for initial science runs in support of the SSP mission. The deliverable for this milestone will be an LA petascale computing system (named Dawn) usable for the code development and scaling necessary to ensure effective use of a final Sequoia platform (expected in 2011-2012), and for urgent SSP program needs. Allocation and scheduling of Dawn as an LA system will likely be performed informally, similar to what has been used for BlueGene/L. However, provision will be made to allow for dedicated access times for application scaling studies across the entire Dawn resource. The milestone was completed on April 1, 2009, when science runs began on the Dawn system. The following sections describe the Dawn system architecture, current status, installation and integration time line, and the testing and acceptance process. A project plan is included as Attachment 2. Attachment 3 is a letter certifying the handoff of the system to a nuclear weapons stockpile customer. Attachment 4 presents the results of science runs completed on the system.

  13. The Principal Spectrum Runs from Initiators to Responders.

    ERIC Educational Resources Information Center

    Rutherford, William L.

    1990-01-01

    Most principals are either initiators, responders, or managers. Initiators seek out new information, take control of situations, and intervene to support teachers or correct problems. Responders ignore new information, have vague goals, and enact programs initiated by the central office. Manager principals have a hybrid style that generally…

  14. Specific Adaptation of Gas Atomization Processing for Al-Based Alloy Powder for Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Iver; Siemon, John

    The initial three atomization attempts resulted in "freeze-outs" within the pour tubes in the pilot-scale system and yielded no powder. Re-evaluation of the alloy liquidus temperatures and melting characteristics, in collaboration with Alcoa, showed further superheat to be necessary to allow the liquid metal to flow through the pour tube to the atomization nozzle. A subsequent smaller run on the experimental atomization system verified these parameters and was successful, as were all successive runs on the larger pilot-scale system. One alloy composition froze out partway through the atomization on both pilot-scale runs. SEM images showed needle formation and phase segregations within the microstructure. Analysis of the pour-tube freeze-out microstructures showed that large needles formed within the pour tube during the atomization experiment, which eventually blocked the melt stream. Alcoa verified the needle formation in this alloy using theoretical modeling of phase solidification. Sufficient powder of this composition was still generated to allow powder characterization and additive manufacturing trials at Alcoa.

  15. Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Pohlmann, Holger; Kröger, Jürgen; Greatbatch, Richard J.; Müller, Wolfgang A.

    2017-10-01

    Low prediction skill in the tropical Pacific is a common problem in decadal prediction systems, especially for lead years 2-5 which, in many systems, is lower than in uninitialized experiments. On the other hand, the tropical Pacific is of almost worldwide climate relevance through its teleconnections with other tropical and extratropical regions and also of importance for global mean temperature. Understanding the causes of the reduced prediction skill is thus of major interest for decadal climate predictions. We look into the problem of reduced prediction skill by analyzing the Max Planck Institute Earth System Model (MPI-ESM) decadal hindcasts for the fifth phase of the Climate Model Intercomparison Project and performing a sensitivity experiment in which hindcasts are initialized from a model run forced only by surface wind stress. In both systems, sea surface temperature variability in the tropical Pacific is successfully initialized, but most skill is lost at lead years 2-5. Utilizing the sensitivity experiment enables us to pin down the reason for the reduced prediction skill in MPI-ESM to errors in wind stress used for the initialization. A spurious trend in the wind stress forcing displaces the equatorial thermocline in MPI-ESM unrealistically. When the climate model is then switched into its forecast mode, the recovery process triggers artificial El Niño and La Niña events at the surface. Our results demonstrate the importance of realistic wind stress products for the initialization of decadal predictions.

  16. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained from a single model therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model; but all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction which properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble that may improve the variability while also reducing the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. 
The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes simply from adding another member or whether the multi-model member has added value in itself.
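    The PIT diagnostic used above is straightforward to compute: each observation is mapped to the fraction of ensemble members at or below it, and the histogram of these values should be flat for a calibrated ensemble. A minimal NumPy sketch with synthetic data (the distributions and member counts here are illustrative, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pit_values(ensemble, obs):
        """PIT value per forecast: the fraction of ensemble members at or
        below the observation. ensemble is (n_forecasts, n_members), obs is
        (n_forecasts,). Uniform PIT values indicate a calibrated ensemble;
        a U-shape (mass piled near 0 and 1) indicates under-dispersion."""
        return (ensemble <= obs[:, None]).mean(axis=1)

    # Synthetic check: a calibrated ensemble drawn from the same
    # distribution as the observations versus an under-dispersed one.
    obs = rng.normal(0.0, 1.0, size=5000)
    ensembles = {
        "calibrated": rng.normal(0.0, 1.0, size=(5000, 20)),
        "under-dispersed": rng.normal(0.0, 0.3, size=(5000, 20)),
    }
    outer = {}
    for name, ens in ensembles.items():
        pit = pit_values(ens, obs)
        outer[name] = ((pit < 0.1) | (pit > 0.9)).mean()  # outer-decile mass
        print(f"{name}: fraction of PIT values in the outer deciles = "
              f"{outer[name]:.2f}")
    ```

    The under-dispersed ensemble piles PIT mass into the outer deciles, which is exactly the histogram signature that adding multi-model members aims to flatten.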

  17. Convective aggregation in realistic convective-scale simulations

    NASA Astrophysics Data System (ADS)

    Holloway, Christopher E.

    2017-06-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15 day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. 
This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world.Plain Language SummaryUnderstanding the processes that lead to the organization of tropical rainstorms is an important challenge for weather forecasters and climate scientists. Over the last 20 years, idealized models of the tropical atmosphere have shown that tropical rainstorms can spontaneously clump together. These studies have linked this spontaneous organization to processes related to the interaction between the rainstorms, atmospheric water vapor, clouds, radiation, surface evaporation, and circulations. The present study shows that there are some similarities in how organization of rainfall in more realistic computer model simulations interacts with these processes (particularly radiation). This provides some evidence that the work in the idealized model studies is relevant to the organization of tropical rainstorms in the real world.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhR...525....1A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhR...525....1A"><span>Understanding quantum measurement from the solution of dynamical models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Allahverdyan, Armen E.; Balian, Roger; Nieuwenhuizen, Theo M.</p> <p>2013-04-01</p> <p>The quantum measurement problem, to wit, understanding why a unique outcome is obtained in each individual experiment, is currently tackled by solving models. After an introduction we review the many dynamical models proposed over the years for elucidating quantum measurements. 
The approaches range from standard quantum theory, relying for instance on quantum statistical mechanics or on decoherence, to quantum-classical methods, to consistent histories and to modifications of the theory. Next, a flexible and rather realistic quantum model is introduced, describing the measurement of the z-component of a spin through interaction with a magnetic memory simulated by a Curie-Weiss magnet, including N≫1 spins weakly coupled to a phonon bath. Initially prepared in a metastable paramagnetic state, it may transit to its up or down ferromagnetic state, triggered by its coupling with the tested spin, so that its magnetization acts as a pointer. A detailed solution of the dynamical equations is worked out, exhibiting several time scales. Conditions on the parameters of the model are found, which ensure that the process satisfies all the features of ideal measurements. Various imperfections of the measurement are discussed, as well as attempts of incompatible measurements. The first steps consist in the solution of the Hamiltonian dynamics for the spin-apparatus density matrix Dˆ(t). Its off-diagonal blocks in a basis selected by the spin-pointer coupling, rapidly decay owing to the many degrees of freedom of the pointer. Recurrences are ruled out either by some randomness of that coupling, or by the interaction with the bath. On a longer time scale, the trend towards equilibrium of the magnet produces a final state Dˆ(t) that involves correlations between the system and the indications of the pointer, thus ensuring registration. Although Dˆ(t) has the form expected for ideal measurements, it only describes a large set of runs. Individual runs are approached by analyzing the final states associated with all possible subensembles of runs, within a specified version of the statistical interpretation. 
There the difficulty lies in a quantum ambiguity: There exist many incompatible decompositions of the density matrix Dˆ(t) into a sum of sub-matrices, so that one cannot infer from its sole determination the states that would describe small subsets of runs. This difficulty is overcome by dynamics due to suitable interactions within the apparatus, which produce a special combination of relaxation and decoherence associated with the broken invariance of the pointer. Any subset of runs thus reaches over a brief delay a stable state which satisfies the same hierarchic property as in classical probability theory; the reduction of the state for each individual run follows. Standard quantum statistical mechanics alone appears sufficient to explain the occurrence of a unique answer in each run and the emergence of classicality in a measurement process. Finally, pedagogical exercises are proposed and lessons for future works on models are suggested, while the statistical interpretation is promoted for teaching.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/2855','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/2855"><span>Run-Off-Road Collision Avoidance Countermeasures Using IVHS Countermeasures, Task 3, Volume 2, Final Report</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1995-08-01</p> <p>INTELLIGENT VEHICLE INITIATIVE OR IVI : THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES. 
:...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22988629','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22988629"><span>Development of water movement model as a module of moisture content simulation in static pile composting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko</p> <p>2012-01-01</p> <p>This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as functions of relative humidity and temperature, whereas diffusion of liquid water was empirically obtained from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with calculation span of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only a top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. 
This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3974810','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3974810"><span>Precise Maps of RNA Polymerase Reveal How Promoters Direct Initiation and Pausing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kwak, Hojoong; Fuda, Nicholas J.; Core, Leighton J.; Lis, John T.</p> 
<p>2014-01-01</p> <p>Transcription regulation occurs frequently through promoter-associated pausing of RNA polymerase II (Pol II). We developed a Precision nuclear Run-On and sequencing assay (PRO-seq) to map the genome-wide distribution of transcriptionally-engaged Pol II at base-pair resolution. Pol II accumulates immediately downstream of promoters, at intron-exon junctions that are efficiently used for splicing, and over 3' poly-adenylation sites. Focused analyses of promoters reveal that pausing is not fixed relative to initiation sites nor is it specified directly by the position of a particular core promoter element or the first nucleosome. Core promoter elements function beyond initiation, and when optimally positioned they act collectively to dictate the position and strength of pausing. We test this ‘Complex Interaction’ model with insertional mutagenesis of the Drosophila Hsp70 core promoter. PMID:23430654</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.H23N1077W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.H23N1077W"><span>Quasi-decadal Oscillation in the CMIP5 and CMIP3 Climate Model Simulations: California Case</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, J.; Yin, H.; Reyes, E.; Chung, F. I.</p> <p>2014-12-01</p> <p>The ongoing three drought years in California are reminding us of two other historical long drought periods: 1987-1992 and 1928-1934. This kind of interannual variability is corresponding to the dominating 7-15 yr quasi-decadal oscillation in precipitation and streamflow in California. 
When using global climate model projections to assess the climate change impact on water resources planning in California, it is natural to ask if global climate models are able to reproduce the observed interannual variability like 7-15 yr quasi-decadal oscillation. Further spectral analysis to tree ring retrieved precipitation and historical precipitation record proves the existence of 7-15 yr quasi-decadal oscillation in California. But while implementing spectral analysis to all the CMIP5 and CMIP3 global climate model historical simulations using wavelet analysis approach, it was found that only two models in CMIP3 , CGCM 2.3.2a of MRI and NCAP PCM1.0, and only two models in CMIP5, MIROC5 and CESM1-WACCM, have statistically significant 7-15 yr quasi-decadal oscillations in California. More interesting, the existence of 7-15 yr quasi-decadal oscillation in the global climate model simulation is also sensitive to initial conditions. 12-13 yr quasi-decadal oscillation occurs in one ensemble run of CGCM 2.3.2a of MRI but does not exist in the other four ensemble runs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/7601865','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/7601865"><span>Direct dynamics simulation of the impact phase in heel-toe running.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gerritsen, K G; van den Bogert, A J; Nigg, B M</p> <p>1995-06-01</p> <p>The influence of muscle activation, position and velocities of body segments at touchdown and surface properties on impact forces during heel-toe running was investigated using a direct dynamics simulation technique. The runner was represented by a two-dimensional four- (rigid body) segment musculo-skeletal model. 
Incorporated into the muscle model were activation dynamics, force-length and force-velocity characteristics of seven major muscle groups of the lower extremities: mm. glutei, hamstrings, m. rectus femoris, mm. vasti, m. gastrocnemius, m. soleus and m. tibialis anterior. The vertical force-deformation characteristics of heel, shoe and ground were modeled by a non-linear visco-elastic element. The maximum of a typical simulated impact force was 1.6 times body weight. The influence of muscle activation was examined by generating muscle stimulation combinations which produce the same (experimentally determined) resultant joint moments at heelstrike. Simulated impact peak forces with these different combinations of muscle stimulation levels varied less than 10%. Without this restriction on initial joint moments, muscle activation had potentially a much larger effect on impact force. Impact peak force was to a great extent influenced by plantar flexion (85 N per degree of change in foot angle) and vertical velocity of the heel (212 N per 0.1 m s-1 change in velocity) at touchdown. Initial knee flexion (68 N per degree of change in leg angle) also played a role in the absorption of impact. Increased surface stiffness resulted in higher impact peak forces (60 N mm-1 decrease in deformation).(ABSTRACT TRUNCATED AT 250 WORDS)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMGC42C..01A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMGC42C..01A"><span>The Jormungand Global Climate State and Implications for the Neoproterozoic Snowball Paradox (Invited)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abbot, D. S.; Voigt, A.; Koll, D.; Pierrehumbert, R. 
T.</p> <p>2010-12-01</p> <p>We present a previously undescribed global climate state, the Jormungand state, that is nearly ice-covered with a narrow (~10-15 degrees of latitude) strip of open ocean near the equator. This state is sustained by internal dynamics of the hydrological cycle and the cryosphere. There is a new bifurcation in global climate associated with the Jormungand state that leads to significant hysteresis. We investigate the Jormungand state in a coupled ocean-atmosphere GCM, in multiple atmospheric GCMs coupled to a mixed layer ocean run in an idealized configuration, and we make a simple modification to the Budyko-Sellers model so that it produces Jormungand states. We suggest that the Jormungand state may be a better model for the Neoproterozoic glaciations (~635 Ma and ~715 Ma) than either the hard Snowball or the Slushball models. A Jormungand state would have a large enough region of open ocean near the equator to explain the micropaleontological and molecular clock evidence that photosynthetic eukaryotes thrived both before and immediately after the Neoproterozoic episodes. Additionally, since there is significant hysteresis associated with the Jormungand state, it can explain the cap carbonate sequences, the oxygen isotopic evidence that suggests high CO2 values, and the various evidence that suggests lifetimes for the glaciations of 1 Myr or more. Since there is not significant hysteresis associated with the Slushball model, the Slushball model cannot explain these observations. Finally, we note that although the Slushball and Jormungand models share the characteristic of open ocean in the tropics, the Jormungand state is produced by entirely different physics, is entered through a new bifurcation in global climate, and is associated with significant hysteresis. 
Bifurcation diagram of global climate in the CAM global climate model, run with no continents, a 50 m mixed layer with no ocean heat transport, an eccentricity of zero, and annually and diurnally-varying insolation with a solar constant of 94% of present value. Red diamonds denote simulations initiated from ice-free conditions, blue circles denote simulations initiated from the Jormungand state, and green squares denote simulations initiated from the Snowball state. The black curve shows model equilibria, with dotted unstable solution branches (separatrices) and bifurcations drawn schematically.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA512569','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA512569"><span>West Adriatic Coastal Water Excursions into the East Adriatic</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2009-01-15</p> <p>anticyclonic eddies in the Gulf of Manfredonia which can form in the lee of the WAC flow around Cape Gargano (Burrage et al., 2009-this issue), although the...caused it to remain trapped in the lee of Cape Gargano. In the presence of stepwise bathymetry only (SW2 runs, Fig. 16), the initial flow was generally...L., Wang, J.D., Lee , T.N., 1996. The fate of river discharge on the continental shelf: 1. 
Modeling the river plume and the inner shelf coastal</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1258345','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1258345"><span>Experimental particle physics research at Texas Tech University</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Akchurin, Nural; Lee, Sung-Won; Volobouev, Igor</p> <p></p> <p>The high energy physics group at Texas Tech University (TTU) concentrates its research efforts on the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) and on generic detector R&D for future applications. Our research programs have been continuously supported by the US Department of Energy for over two decades, and this final report summarizes our achievements during the last grant period from May 1, 2012 to March 31, 2016. After having completed the Run 1 data analyses from the CMS detector, including the discovery of the Higgs boson in July 2012, we concentrated on commissioning the CMS hadron calorimeter (HCAL) for Run 2, performing analyses of Run 2 data, and making initial studies and plans for the second phase of upgrades in CMS. Our research has primarily focused on searches for Beyond Standard Model (BSM) physics via dijets, monophotons, and monojets. We also made significant contributions to the analyses of the semileptonic Higgs decays and Standard Model (SM) measurements in Run 1. Our work on the operations of the CMS detector, especially the performance monitoring of the HCAL in Run 1, was indispensable to the experiment. Our team members, holding leadership positions in HCAL, have played key roles in the R&D, construction, and commissioning of these detectors in the last decade. 
We also maintained an active program in jet studies that builds on our expertise in calorimetry and algorithm development. In Run 2, we extended some of our analyses at 8 TeV to 13 TeV, and we also started to investigate new territory, e.g., dark matter searches with unexplored signatures. The objective of the dual-readout calorimetry R&D was to explore (and, if possible, eliminate) the obstacles that prevent calorimetric detection of hadrons and jets with a level of precision comparable to what we have grown accustomed to for electrons and photons. The initial prototype detector was successfully tested at the SPS/CERN in 2003-2004 and evolved over the last decade. In 2012-2015, several other prototypes were built to further reduce leakage fluctuations, improve Cherenkov light yield, increase fiber attenuation length, and address other related issues. During this grant period, we graduated two students with Ph.D. degrees, and five undergraduate students from our labs went on to prestigious graduate programs in the US and Europe. Also, the TTU HEP team has participated in the QuarkNet program every year since 2001. We are dedicated to working with area teachers and students at all levels and to training the next generation of scientists. Over 20 high school teachers have participated in our program since its inception.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21565440','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21565440"><span>Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos</p> <p>2011-09-01</p> <p>The management of dewatered wastewater sludge is a major issue worldwide. 
Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O2 uptake rates, CO2 evolution, temperature profile and solids reduction. Eight runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio-C/bio-N ratio started at around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28986390','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28986390"><span>Relationship between 1.5-mile run time, injury risk and training outcome in British Army recruits.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hall, Lianne J</p> <p>2017-12-01</p> <p>1.5-mile run time, as a surrogate measure of aerobic fitness, is associated with musculoskeletal injury (MSI) risk in military recruits. This study aimed to determine if 1.5-mile run times can predict injury risk and attrition rates from phase 1 (initial) training and determine if a link exists between phase 1 and 2 discharge outcomes in British Army recruits. 1.5-mile times from week 1 of initial training and MSI reported during training were retrieved for 3446 male recruits. Run times were examined against injury occurrence and training outcomes for 3050 recruits, using binary logistic regression and χ² analysis. The 1.5-mile run can predict injury risk and phase 1 attrition rates (χ²(1)=59.3, p<0.001; χ²(1)=66.873, p<0.001). Slower 1.5-mile run times were associated with higher injury occurrence (χ²(1)=59.3, p<0.001) and reduced phase 1 (χ²=104.609, p<0.001) and phase 2 (χ²=84.978, p<0.001) success. The 1.5-mile run can be used to guide a future standard that will in turn help reduce injury occurrence and improve training success. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28003554','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28003554"><span>Reliability of Vibrating Mesh Technology.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gowda, Ashwin A; Cuccia, Ann D; Smaldone, Gerald C</p> <p>2017-01-01</p> <p>For delivery of inhaled aerosols, vibrating mesh systems are more efficient than jet nebulizers and do not require added gas flow. We assessed the reliability of a vibrating mesh nebulizer (Aerogen Solo, Aerogen Ltd, Galway Ireland) suitable for use in mechanical ventilation. An initial observational study was performed with 6 nebulizers to determine run time and efficiency using normal saline and distilled water. Nebulizers were run until cessation of aerosol production was noted, with residual volume and run time recorded. Three controllers were used to assess the impact of the controller on nebulizer function. Following the observational study, a more detailed experimental protocol was performed using 20 nebulizers. For this analysis, 2 controllers were used, and time to cessation of aerosol production was noted. Gravimetric techniques were used to measure residual volume. Total nebulization time and residual volume were recorded. Failure was defined as premature cessation of aerosol production represented by residual volume of > 10% of the nebulizer charge. In the initial observational protocol, an unexpected sporadic failure rate of 25% was noted in 55 experimental runs. In the experimental protocol, a failure rate of 30% was noted in 40 experimental runs. 
Failed runs in the experimental protocol exhibited a wide range of retained volume, averaging (± SD) 36 ± 21.3% compared with 3.2 ± 1.5% (P = .001) in successful runs. Small but significant differences existed in nebulization time between controllers. Aerogen Solo nebulization was often randomly interrupted with a wide range of retained volumes. Copyright © 2017 by Daedalus Enterprises.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/951766','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/951766"><span>EnergyPlus Run Time Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hong, Tianzhen; Buhl, Fred; Haves, Philip</p> <p>2008-09-20</p> <p>EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. 
Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJMPA..3250129O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJMPA..3250129O"><span>Gravitational baryogenesis in running vacuum models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Oikonomou, V. K.; Pan, Supriya; Nunes, Rafael C.</p> <p>2017-08-01</p> <p>We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a nonzero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a sound difference between the running vacuum gravitational baryogenesis and the Einstein-Hilbert one, since in the latter case, the predicted baryon-to-entropy ratio is zero. We consider two well known and most used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. 
Moreover, we show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..SHK.T2001C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..SHK.T2001C"><span>Ignition and Growth Reactive Flow Modeling of Shock Initiation of PBX 9502 at -55°C and -196°C</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chidester, Steven; Tarver, Craig</p> <p>2015-06-01</p> <p>Recently Gustavsen et al. and Hollowell et al. published two stage gas gun embedded particle velocity gauge experiments on PBX 9502 (95% TATB, 5% Kel-F800) cooled to -55°C and -196°C, respectively. At -196°C, PBX 9502 was shown to be much less shock sensitive than at -55°C, but it did transition to detonation. Previous Ignition and Growth model parameters for shock initiation of PBX 9502 at -55°C are modified based on the new data, and new parameters for -196°C PBX 9502 are created to accurately simulate the measured particle velocity histories and run distances to detonation versus shock pressures. This work was performed under the auspices of the U. S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. 
DE-AC52-07NA27344.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013P%26SS...82...11H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013P%26SS...82...11H"><span>Modeling granular material flows: The angle of repose, fluidization and the cliff collapse problem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Holsapple, Keith A.</p> <p>2013-07-01</p> <p>I discuss theories of granular material flows, with application to granular flows on the earth and planets. There are two goals. First, there is a lingering belief of some that the standard continuum plasticity Mohr-Coulomb and/or Drucker-Prager models are not adequate for many large-scale granular flow problems. The stated reason for those beliefs is the fact that the final slopes of the run-outs in collapse, landslide problems, and large-scale cratering are well below the angle of repose of the material. That observation, combined with the supposition that in those models flow cannot occur with slopes less than the angle of repose, has led to a number of researchers suggesting a need for lubrication or fluidization mechanisms and modeling. That issue is investigated in detail and shown to be false. A complete analysis of slope failures according to the Mohr-Coulomb model is presented, with special attention to the relations between the angle of repose and slope failures. It is shown that slope failure can occur for slope angles both larger than and smaller than the angle of repose. Second, to study the details of landslide run-outs, finite-difference continuum code simulations of the prototypical cliff collapse problem, using the classical plasticity models, are presented, analyzed and compared to experiments. 
Although devoid of any additional fluidization models, those simulations match experiments in the literature extremely well. The dynamics of this problem introduces additional important features relating to the run-out and final slope angles. The vertical free surface begins to fall from its initial 90°, and the flow continues to a final slope of less than 10°. The details of the calculation are examined to show why flow persists at slope angles that appear to be less than the angle of repose. The motions include regions of solid-like, fluid-like, and gas-like flows without invoking any additional models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24290667','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24290667"><span>[Potentials of cooperative quality management initiatives: BQS Institute projects, January 2010 - July 2013].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Veit, Christof; Bungard, Sven; Hertle, Dagmar; Grothaus, Franz-Josef; Kötting, Joachim; Arnold, Nicolai</p> <p>2013-01-01</p> <p>Alongside the projects of internal quality management and mandatory quality assurance there is a variety of quality driven projects across institutions initiated and run by various partners to continuously improve the quality of care. The multiplicity and characteristics of these projects are discussed on the basis of projects run by the BQS Institute between 2010 and 2013. In addition, useful interactions and linking with mandatory quality benchmarking and with internal quality management are discussed. (As supplied by publisher). Copyright © 2013. 
Published by Elsevier GmbH.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013GMDD....6.6659P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013GMDD....6.6659P"><span>Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.</p> <p>2013-12-01</p> <p>Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases comprise 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecast System with 0.5°-resolution data from the National Centers for Environmental Prediction, which is also used every 3 h as a lateral boundary condition. Topographic shading is turned on and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. 
Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases and grid resolution. Our comparisons show overall good agreement between simulated and observed data and also indicate that the low resolution of the 30 arc-sec soil database from the United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses and the synoptic forcing on the lateral boundaries of the finer grids may limit an adequate spatial description of the meteorological variables.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1915719A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1915719A"><span>A framework for improving a seasonal hydrological forecasting system using sensitivity analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arnal, Louise; Pappenberger, Florian; Smith, Paul; Cloke, Hannah</p> <p>2017-04-01</p> <p>Seasonal streamflow forecasts are of great value for the socio-economic sector, for applications such as navigation, flood and drought mitigation and reservoir management for hydropower generation and water allocation to agriculture and drinking water. However, at present, the performance of dynamical seasonal hydrological forecasting systems (systems based on running seasonal meteorological forecasts through a hydrological model to produce seasonal hydrological forecasts) is still limited in space and time. 
In this context, the ESP (Ensemble Streamflow Prediction) approach remains attractive for seasonal streamflow forecasting, as it relies on forcing a hydrological model (starting from the latest observed or simulated initial hydrological conditions) with historical meteorological observations. This makes it cheaper to run than a standard dynamical seasonal hydrological forecasting system, for which seasonal meteorological forecasts must first be produced, while still producing skilful forecasts. There is thus a need to focus resources and time on improvements in dynamical seasonal hydrological forecasting systems which will eventually lead to significant improvements in the skill of the streamflow forecasts generated. Sensitivity analyses are a powerful tool that can be used to disentangle the relative contributions of the two main sources of errors in seasonal streamflow forecasts, namely the initial hydrological conditions (IHC; e.g., soil moisture, snow cover, initial streamflow, among others) and the meteorological forcing (MF; i.e., seasonal meteorological forecasts of precipitation and temperature, input to the hydrological model). Sensitivity analyses are however most useful if they inform and change current operational practices. To this end, we propose a method to improve the design of a seasonal hydrological forecasting system. 
This method is based on sensitivity analyses, informing the forecasters as to which element of the forecasting chain (i.e., IHC or MF) could potentially lead to the highest increase in seasonal hydrological forecasting performance, after each forecast update.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFMNH11D..05G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFMNH11D..05G"><span>Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>George, D. L.; Iverson, R. M.</p> <p>2012-12-01</p> <p>Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris-flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. 
With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the ≈50 million m³ debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. 
Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent; the large scale of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007IJNMF..53.1381V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007IJNMF..53.1381V"><span>A numerical strategy for modelling rotating stall in core compressors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vahdati, M.</p> <p>2007-03-01</p> <p>The paper will focus on one specific core-compressor instability, rotating stall, because of the pressing industrial need to improve current design methods. The determination of the blade response during rotating stall is a difficult problem for which there is no reliable procedure. During rotating stall, the blades encounter the stall cells and the excitation depends on the number, size, exact shape and rotational speed of these cells. The long-term aim is to minimize the forced response due to rotating stall excitation by avoiding potential matches between the vibration modes and the rotating stall pattern characteristics. Accurate numerical simulations of core-compressor rotating stall phenomena require the modelling of a large number of bladerows using grids containing several tens of millions of points. The time-accurate unsteady-flow computations may need to be run for several engine revolutions for rotating stall to initiate and many more before it is fully developed. The difficulty in rotating stall initiation arises from a lack of representation of the triggering disturbances which are inherently present in aeroengines. 
Since the numerical model represents a symmetric assembly, the only random mechanism for rotating stall initiation is provided by numerical round-off errors. In this work, rotating stall is initiated by introducing a small amount of geometric mistuning to the rotor blades. Another major obstacle in modelling flows near stall is the specification of appropriate upstream and downstream boundary conditions. Obtaining reliable boundary conditions for such flows can be very difficult. In the present study, the low-pressure compression (LPC) domain is placed upstream of the core compressor. With such an approach, only far-field atmospheric boundary conditions, obtained from aircraft speed and altitude, are specified. A choked variable-area nozzle, placed after the last compressor bladerow in the model, is used to impose boundary conditions downstream. Such an approach is representative of modelling an engine. Using a 3D viscous time-accurate flow representation, the front bladerows of a core compressor were modelled in a whole-annulus fashion whereas the remaining bladerows were modelled in a single-passage fashion. The rotating stall behaviour at two different compressor operating points was studied by considering two different variable-vane scheduling conditions for which experimental data were available. Using a model with nine whole-annulus bladerows, the unsteady-flow calculations were conducted on 32 CPUs of a parallel cluster, typical run times being around 3-4 weeks for a grid with about 60 million points. The simulations were conducted over several engine rotations. 
As observed on the actual development engine, there was no rotating stall for the first scheduling condition, while mal-scheduling of the stator vanes created a 12-band rotating stall which excited the 1st flap mode.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JHyd..561..509N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JHyd..561..509N"><span>Perturbations in the initial soil moisture conditions: Impacts on hydrologic simulation in a large river basin</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal</p> <p>2018-06-01</p> <p>Real-time hydrologic forecasting requires near-accurate initial soil moisture conditions; however, continuous monitoring of soil moisture is not operational in many regions, such as the Ganga basin, which extends across Nepal, India and Bangladesh. Here, we examine the impacts of perturbations/errors in the initial soil moisture conditions on simulated soil moisture and streamflow in the Ganga basin, and their propagation during the summer monsoon season (June to September). This provides information regarding the minimum duration of model simulation required for attaining model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long-term runs, along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find quick convergence for years with high rainfall as well as for wet spells within a season. 
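The experimental design above (perturb the initial soil moisture, then measure how long the perturbed run takes to rejoin the control) can be illustrated with a toy bucket model. The model form and every parameter below are illustrative stand-ins, not the paper's Variable Infiltration Capacity configuration:

```python
def convergence_time(p_daily, s0, delta, smax=150.0, k=0.05, tol=0.5):
    """Days until a perturbed run rejoins the control run of a toy bucket
    model (storage loss proportional to storage, overflow above smax).
    Returns None if the runs do not converge within the forcing series.
    All parameter values are hypothetical."""
    s_ctl, s_pert = s0, s0 + delta
    for day, p in enumerate(p_daily, start=1):
        s_ctl = min(s_ctl + p - k * s_ctl, smax)
        s_pert = min(s_pert + p - k * s_pert, smax)
        if abs(s_ctl - s_pert) < tol:
            return day
    return None

# Same initial error under wet and dry daily forcing (mm/day)
wet = convergence_time([12.0] * 60, s0=100.0, delta=20.0)
dry = convergence_time([2.0] * 60, s0=100.0, delta=20.0)
```

Under wet forcing both runs saturate the store and the initial error is destroyed quickly, echoing the faster convergence reported for high-rainfall years and wet spells; under dry forcing the error only decays slowly.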
We further find high spatial variation in the time required for convergence; regions with high precipitation, such as the Lower Ganga basin, attain convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt at understanding the sensitivity of hydrological simulations of the Ganga basin to initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008ApJS..174..145G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008ApJS..174..145G"><span>Axisymmetric Shearing Box Models of Magnetized Disks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guan, Xiaoyue; Gammie, Charles F.</p> <p>2008-01-01</p> <p>The local model, or shearing box, has proven to be a useful model for studying the dynamics of astrophysical disks. Here we consider the evolution of magnetohydrodynamic (MHD) turbulence in an axisymmetric local model in order to evaluate the limitations of global axisymmetric models. An exploration of the model parameter space shows the following: (1) The magnetic energy and α decay approximately exponentially after an initial burst of turbulence. For our code, HAM, the decay time τ ∝ Res, where Res/2 is the number of zones per scale height. (2) In the initial burst of turbulence the magnetic energy is amplified by a factor proportional to Res^(3/4) λ_R, where λ_R is the radial scale of the initial field. This scaling applies only if the most unstable wavelength of the magnetorotational instability is resolved and the final field is subthermal. (3) The shearing box is a resonant cavity and in linear theory exhibits a discrete set of compressive modes. 
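The exponential decay in finding (1) means the decay time τ can be read off a log-linear least-squares fit of the magnetic-energy history; the abstract does not specify the authors' own fitting procedure, so this is a generic sketch on synthetic data:

```python
import numpy as np

def decay_time(t, energy):
    """Estimate the e-folding time tau from E(t) ~ E0 * exp(-t / tau)
    by fitting log E against t with least squares."""
    slope, _ = np.polyfit(t, np.log(energy), 1)
    return -1.0 / slope

t = np.linspace(0.0, 50.0, 200)      # time, in orbits say
e = 3.0 * np.exp(-t / 12.5)          # synthetic decay with tau = 12.5
tau = decay_time(t, e)
```

Repeating such a fit across runs at different Res would expose the τ ∝ Res scaling quoted above.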
These modes are excited by the MHD turbulence and are visible as quasi-periodic oscillations (QPOs) in temporal power spectra of fluid variables at low spatial resolution. At high resolution the QPOs are hidden by a noise continuum. (4) In axisymmetry, disk turbulence is local. The correlation function of the turbulence is limited in radial extent, and the peak magnetic energy density is independent of the radial extent of the box L_R for L_R > 2H. (5) Similar results are obtained for the HAM, ZEUS, and ATHENA codes; ATHENA has an effective resolution that is nearly double that of HAM and ZEUS. (6) Similar results are obtained for 2D and 3D runs at similar resolution, but only for particular choices of the initial field strength and radial scale of the initial magnetic field.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li class="active"><span>20</span></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol 
class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20100014816&hterms=drought&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Ddrought','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20100014816&hterms=drought&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Ddrought"><span>An Assessment of the Potential Predictability of Drought Over the United States Based on Climate Model Simulations with Specified SST</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schubert, Siegfried; Wang, Hailan; Suarez, Max; Koster, Randal</p> <p>2010-01-01</p> <p>The US CLIVAR working group on drought recently initiated a series of global climate model simulations forced with idealized SST anomaly patterns, designed to address a number of uncertainties regarding the impact of SST forcing and the role of land-atmosphere feedbacks on regional drought. The runs were done with several global atmospheric models including NASA/NSIPP-1, NCEP/GFS, GFDL/AM2, and NCAR CCM3 and CAM3.5. Specific questions that the runs are designed to address include: What are the mechanisms that maintain drought across the seasonal cycle and from one year to the next? To what extent can droughts develop independently of ocean variability, due to year-to-year memory that may be inherent to the land? What is the role of the different ocean basins? Here we focus on the potential predictability of drought conditions over the United States. 
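Potential predictability in specified-SST experiments like these is commonly summarized as a signal-to-noise ratio: variance of the SST-forced (ensemble-mean) response against the internal, weather-driven variance. A minimal sketch, assuming a hypothetical (SST state × ensemble member) array of seasonal-mean anomalies:

```python
import numpy as np

def signal_to_noise(runs):
    """Potential-predictability diagnostic for an (n_sst_states, n_members)
    array: variance of the forced (ensemble-mean) responses divided by the
    mean internal (within-ensemble) variance. The layout is illustrative."""
    signal = runs.mean(axis=1).var(ddof=1)
    noise = runs.var(axis=1, ddof=1).mean()
    return signal / noise

rng = np.random.default_rng(1)
forced = np.array([-1.0, 0.0, 1.0])   # hypothetical responses to 3 SST patterns
runs = forced[:, None] + 0.1 * rng.standard_normal((3, 50))
snr = signal_to_noise(runs)
```

A large ratio means the SST pattern, if known, would constrain the drought response well; computing it per season and per region gives exactly the seasonality and regionality discussed in this record.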
Specific issues addressed include the seasonality and regionality of the signal-to-noise ratios associated with Pacific and Atlantic SST forcing, and the sensitivity of the results to the climatological stationary waves simulated by the different AGCMs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030093717','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030093717"><span>The Madden-Julian Oscillation and its Impact on Northern Hemisphere Weather Predictability during Wintertime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jones, Charles; Waliser, Duane E.; Lau, K. M.; Stern, W.</p> <p>2003-01-01</p> <p>The Madden-Julian Oscillation (MJO) is known as the dominant mode of tropical intraseasonal variability and has an important role in the coupled ocean-atmosphere system. This study used twin numerical model experiments to investigate the influence of MJO activity on weather predictability in the midlatitudes of the Northern Hemisphere during boreal winter. The National Aeronautics and Space Administration (NASA) Goddard Laboratory for Atmospheres (GLA) general circulation model was first used in a 10-yr simulation with fixed climatological SSTs to generate a validation data set as well as to select initial conditions for active MJO periods and Null cases. Two perturbation numerical experiments were performed for the 75 cases selected [(4 MJO phases + Null phase) × 15 initial conditions each]. For each alternative initial condition, the model was integrated for 90 days. Mean anomaly correlations in the midlatitudes of the Northern Hemisphere (20°N-60°N) and standardized root-mean-square errors were computed to validate the forecasts against the control run. 
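The two verification scores named above can be sketched as follows. This is the standard centered anomaly correlation and an RMSE scaled by climatological spread, without the latitude-dependent area weighting a real midlatitude-band average would need:

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Anomaly correlation between forecast and verifying fields,
    both expressed as departures from the same climatology."""
    fa = forecast - climatology
    va = verification - climatology
    return np.sum(fa * va) / np.sqrt(np.sum(fa**2) * np.sum(va**2))

def standardized_rmse(forecast, verification, clim_std):
    """Root-mean-square error divided by the climatological standard
    deviation, so 1.0 means 'no better than climatology-sized errors'."""
    return np.sqrt(np.mean((forecast - verification) ** 2)) / clim_std

clim = np.zeros(100)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))
acc_perfect = anomaly_correlation(truth, truth, clim)
srmse_perfect = standardized_rmse(truth, truth, 1.0)
```

Tracking how fast the anomaly correlation of the perturbed forecasts decays toward zero, relative to the control, is what yields the predictability differences between active-MJO and quiescent cases.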
The analyses of 500-hPa geopotential height, 200-hPa streamfunction and 850-hPa zonal wind component systematically show larger predictability during periods of active MJO as opposed to quiescent episodes of the oscillation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130000587','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130000587"><span>Recent Upgrades to NASA SPoRT Initialization Datasets for the Environmental Modeling System</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Zavodsky, Bradley T.; Rozumalski, Robert A.</p> <p>2012-01-01</p> <p>The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can initialize specific fields for local model runs within the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS). In last year's NWA abstract on this topic, the suite of SPoRT products supported in the STRC EMS was presented, which includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Green Vegetation Fraction (GVF) composite, and NASA Land Information System (LIS) gridded output. This abstract and companion presentation describes recent upgrades made to the SST and GVF composites, as well as the real-time LIS runs. The Great Lakes sea-ice product is unchanged from 2011. The SPoRT SST composite product has been expanded geographically and as a result, the resolution has been coarsened from 1 km to 2 km to accommodate the larger domain. The expanded domain covers much of the northern hemisphere from eastern Asia to western Europe (0°N to 80°N latitude and 150°E to 10°E longitude). 
In addition, the NESDIS POES-GOES product was added to fill in gaps caused by the Moderate Resolution Imaging Spectroradiometer (MODIS) being unable to sense in cloudy regions, replacing the recently-lost Advanced Microwave Scanning Radiometer for EOS with negligible change to product fidelity. The SST product now runs twice per day for Terra and Aqua combined data collections from 0000 to 1200 UTC and from 1200 to 0000 UTC, with valid analysis times at 0600 and 1800 UTC. The twice-daily compositing technique reduces the overall latency of the previous version while still representing the diurnal cycle characteristics. The SST composites are available at approximately four hours after the end of each collection period (i.e. 1600 UTC for the nighttime analysis and 0400 UTC for the daytime analysis). The real-time MODIS GVF composite has only received minor updates in the past year. The domain was expanded slightly to extend further west, north, and east to improve coverage over parts of southern Canada. Minor adjustments were also made to the manner in which GVF is calculated from the distribution of maximum Normalized Difference Vegetation Index from MODIS. The presentation will highlight some examples of the substantial inter-annual change in GVF that occurred from 2010 to 2011 in the U.S. Southern Plains as a result of the summer 2011 drought, and the early vegetation green up across the eastern U.S. due to the very warm conditions in March 2012. Finally, the SPoRT LIS runs the operational Noah land surface model (LSM) in real time over much of the eastern half of the CONUS. The Noah LSM is continually cycled in real time, uncoupled to any model, and driven by operational atmospheric analyses over a long-term, multi-year integration. The LIS-Noah provides the STRC EMS with high-resolution (3 km) LSM initialization data that are in equilibrium with the operational analysis forcing. 
The Noah LSM within the SPoRT LIS has been upgraded from version 2.7.1 to version 3.2, which has improved look-up table attributes for several land surface quantities. The surface albedo field is now being adjusted based on the input real-time MODIS GVF, thereby improving the net radiation. Also, the LIS-Noah now uses the newer MODIS-based land use classification scheme (i.e. the International Geosphere-Biosphere Programme [IGBP]) that has a better depiction of urban corridors in areas where urban sprawl has occurred. STRC EMS users interested in initializing their LSM fields with high-resolution SPoRT LIS data should set up their model domain with the MODIS-IGBP 20-class land use database and select Noah as the LSM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18706563','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18706563"><span>A new dimensionless number highlighted from mechanical energy exchange during running.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Delattre, Nicolas; Moretto, Pierre</p> <p>2008-09-18</p> <p>This study aimed to highlight a new dimensionless number from mechanical energy transfer occurring at the centre of gravity (Cg) during running. We built two different-sized spring-mass models (SMM #1 and SMM #2). SMM #1 was built from previously published data, and SMM #2 was built to be dynamically similar to SMM #1. The potential gravitational energy (E(P)), kinetic energy (E(K)), and potential elastic energy (E(E)) were taken into account to test our hypothesis. For both SMM #1 and SMM #2, N(Mo-Dela)=(E(P)+E(K))/E(E) reached the same mean value and was constant (4.1 ± 0.7) between 30% and 70% of contact time. Values of N(Mo-Dela) obtained outside this time interval were due to the absence of E(E) at initial and final times of the simulation. 
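The dimensionless number above, N(Mo-Dela) = (E(P)+E(K))/E(E), can be evaluated on a minimal vertical spring-mass bounce. The mass, stiffness, rest leg length and touchdown speed below are illustrative values, not those of SMM #1 or SMM #2, and the energy datum for E(P) is taken at the ground:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def bounce_contact(m=70.0, k=20000.0, l0=1.0, v_td=-1.0, dt=1e-4):
    """Integrate the contact phase of a vertical spring-mass model
    (symplectic Euler) and return E_P, E_K, E_E and N = (E_P + E_K) / E_E
    at each step from touchdown to takeoff. Parameters are hypothetical."""
    y, v = l0, v_td
    ep, ek, ee = [], [], []
    while True:
        v += (-G + (k / m) * (l0 - y)) * dt
        y += v * dt
        if y >= l0:          # spring back at rest length: takeoff
            break
        ep.append(m * G * y)
        ek.append(0.5 * m * v**2)
        ee.append(0.5 * k * (l0 - y)**2)
    ep, ek, ee = map(np.array, (ep, ek, ee))
    return ep, ek, ee, (ep + ek) / ee

ep, ek, ee, n_md = bounce_contact()
mid = slice(int(0.3 * n_md.size), int(0.7 * n_md.size))  # 30-70% of contact
```

As the abstract notes, N blows up near touchdown and takeoff where E(E) vanishes, which is why the 30-70% window of contact time is the meaningful region.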
This phenomenon does not occur during in vivo running because a leg muscle's pre-activation enables potential elastic energy storage prior to ground contact. Our findings also revealed that two different-sized spring-mass models bouncing with equal N(Mo-Dela) values moved in a dynamically similar fashion. N(Mo-Dela), which can be expressed by the combination of Strouhal and Froude numbers, could be of great interest in order to study animal and human locomotion under Earth's gravity or to induce dynamic similarity between different-sized individuals during bouncing gaits.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20110013552&hterms=so2&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dso2','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20110013552&hterms=so2&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dso2"><span>Dispersion and Lifetime of the SO2 Cloud from the August 2008 Kasatochi Eruption</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Krotkov, N. A.; Schoeberl, M. R.; Morris, G. A.; Carn, S.; Yang, K.</p> <p>2010-01-01</p> <p>Hemispherical dispersion of the SO2 cloud from the August 2008 Kasatochi eruption is analyzed using satellite data from the Ozone Monitoring Instrument (OMI) and the Goddard Trajectory Model (GTM). The operational OMI retrievals underestimate the total SO2 mass by 20-30% on 8-11 August, as compared with more accurate offline Extended Iterative Spectral Fit (EISF) retrievals, but the error decreases with time due to plume dispersion and a drop in peak SO2 column densities. The GTM runs were initialized with and compared to the operational OMI SO2 data during early plume dispersion to constrain SO2 plume heights and eruption times. 
The most probable SO2 heights during initial dispersion are estimated to be 10-12 km, in agreement with direct height retrievals using the EISF algorithm and IR measurements. Using these height constraints, a forward GTM run was initialized on 11 August to compare with the month-long Kasatochi SO2 cloud dispersion patterns. Predicted volcanic cloud locations generally agree with OMI observations, although some discrepancies were observed. Operational OMI SO2 burdens were refined using GTM-predicted mass-weighted probability density height distributions. The total refined SO2 mass was integrated over the Northern Hemisphere to place empirical constraints on the SO2 chemical decay rate. The resulting lower limit of the Kasatochi SO2 e-folding time is approximately 8-9 days. Extrapolation of the exponential decay back in time yields an initial erupted SO2 mass of approximately 2.2 Tg on 8 August, twice as much as the measured mass on that day.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26433561','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26433561"><span>Wheel running exercise attenuates vulnerability to self-administer nicotine in rats.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sanchez, Victoria; Lycas, Matthew D; Lynch, Wendy J; Brunzell, Darlene H</p> <p>2015-11-01</p> <p>Preventing or postponing tobacco use initiation could greatly reduce the number of tobacco-related deaths. While evidence suggests that exercise is a promising treatment for tobacco addiction, it is not clear whether exercise could prevent initial vulnerability to tobacco use. Thus, using an animal model, we examined whether exercise attenuates vulnerability to the use and reinforcing effects of nicotine, the primary addictive chemical in tobacco. 
Initial vulnerability was assessed using an acquisition procedure wherein exercising (unlocked running wheel, n=10) and sedentary (locked or no wheel, n=12) male adolescent rats had access to nicotine infusions (0.01 mg/kg) during daily 21.5-h sessions beginning on postnatal day 30. Exercise/sedentary sessions (2 h/day) were conducted prior to each of the acquisition sessions. The effects of exercise on nicotine's reinforcing effects were further assessed in separate groups of exercising (unlocked wheel, n=7) and sedentary (no wheel, n=5) rats responding for nicotine under a progressive-ratio schedule with exercise/sedentary sessions (2 h/day) conducted before the daily progressive-ratio sessions. While high rates of acquisition of nicotine self-administration were observed among both groups of sedentary controls, acquisition was robustly attenuated in the exercise group with only 20% of exercising rats meeting the acquisition criterion within the 16-day testing period as compared to 67% of the sedentary controls. Exercise also decreased progressive-ratio responding for nicotine as compared to baseline and to sedentary controls. Exercise may effectively prevent the initiation of nicotine use in adolescents by reducing the reinforcing effects of nicotine. Copyright © 2015 Elsevier Ireland Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ascl.soft02011D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ascl.soft02011D"><span>runDM: Running couplings of Dark Matter to the Standard Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo</p> <p>2018-02-01</p> <p>runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23091785','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23091785"><span>Changes in lower extremity movement and power absorption during forefoot striking and barefoot running.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Williams, D S Blaise; Green, Douglas H; Wurzinger, Brian</p> <p>2012-10-01</p> <p>Both forefoot strike shod (FFS) and barefoot (BF) running styles result in different mechanics when compared to rearfoot strike (RFS) shod running. Additionally, running mechanics of FFS and BF running are similar to one another. 
Comparing the mechanical changes occurring in each of these patterns is necessary to understand potential benefits and risks of these running styles. The authors hypothesized that FFS and BF conditions would result in increased sagittal plane joint angles at initial contact and that FFS and BF conditions would demonstrate a shift in sagittal plane joint power from the knee to the ankle when compared to the RFS condition. Finally, they hypothesized that total lower extremity power absorption would be least in BF and greatest in the RFS shod condition. The study included 10 male and 10 female RFS runners who completed 3-dimensional running analysis in 3 conditions: shod with RFS, shod with FFS, and BF. Variables were the angles of plantarflexion, knee flexion, and hip flexion at initial contact and peak sagittal plane joint power at the hip, knee, and ankle during stance phase. Running with an FFS pattern and BF resulted in significantly greater plantarflexion and significantly less negative knee power (absorption) when compared to the shod RFS condition. FFS condition runners landed in the most plantarflexion and demonstrated the most peak ankle power absorption and lowest knee power absorption among the 3 conditions. BF and FFS conditions demonstrated decreased total lower extremity power absorption compared to the shod RFS condition but did not differ from one another. BF and FFS running result in reduced total lower extremity power, hip power and knee power and a shift of power absorption from the knee to the ankle. Alterations associated with BF running patterns are present in an FFS pattern when wearing shoes. 
Additionally, both patterns result in increased demand at the foot and ankle as compared to the knee.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810017091','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810017091"><span>Summary of results of January climate simulations with the GISS coarse-mesh model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Spar, J.; Cohen, C.; Wu, P.</p> <p>1981-01-01</p> <p>The large scale climates generated by extended runs of the model are relatively independent of the initial atmospheric conditions, if the first few months of each simulation are discarded. The perpetual January simulations with a specified SST field produced excessive snow accumulation over the continents of the Northern Hemisphere. Mass exchanges between the cold (warm) continents and the warm (cold) adjacent oceans produced significant surface pressure changes over the oceans as well as over the land. 
The effect of terrain elevation on the amount of precipitation was examined. The evaporation of continental moisture was calculated to cause large increases in precipitation over the continents.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMEP34A..05S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMEP34A..05S"><span>Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.</p> <p>2016-12-01</p> <p>Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes at the network scale based on remotely-sensed morphology and modelled hydrology. 
CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of the initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport processes, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005AtmEn..39.1961S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005AtmEn..39.1961S"><span>Distributed run of a one-dimensional model in a regional application using SOAP-based web services</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smiatek, Gerhard</p> <p></p> <p>This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP)-driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. 
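The scatter/gather pattern described above (a multi-threaded dispatcher farming independent single-column model runs out to remote services and collecting the results) can be sketched as follows. The article's implementation uses Perl threads and SOAP web services; this Python sketch keeps only the pattern, and `run_model_remote` is a hypothetical stand-in for the remote call:

```python
from concurrent.futures import ThreadPoolExecutor

def run_model_remote(cell):
    """Stand-in for the SOAP call that runs the one-dimensional model for
    one grid cell on a remote host; here it just evaluates a toy function."""
    return cell, cell * cell

def run_domain(cells, n_workers=7):
    """Distribute independent per-cell model runs over a worker pool and
    gather the results keyed by cell. Threads fit here because each task
    spends its time waiting on a (simulated) remote host."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(run_model_remote, cells))

results = run_domain(range(10))
```

Because each grid cell is independent, throughput scales with the number of worker hosts until the dispatcher or network saturates, which is consistent with the roughly 400% speed-up over the fastest single host reported for seven services.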
Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28018127','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28018127"><span>Influence of "J"-Curve Spring Stiffness on Running Speeds of Segmented Legs during High-Speed Locomotion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Runxiao; Zhao, Wentao; Li, Shujun; Zhang, Shunqi</p> <p>2016-01-01</p> <p>Both the linear leg spring model and the two-segment leg model with constant spring stiffness have been broadly used as template models to investigate bouncing gaits for legged robots with compliant legs. In addition to these two models, other leg spring stiffness models developed using inspiration from biological characteristics have the potential to improve the high-speed running capacity of spring-legged robots. In this paper, we investigate the effects of "J"-curve spring stiffness inspired by biological materials on running speeds of segmented legs during high-speed locomotion. A mathematical formulation of the relationship between the virtual leg force and the virtual leg compression is established. 
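The abstract does not give the authors' actual force-compression law, but a "J"-curve is commonly modeled as an exponential stiffening law, in contrast to the linear leg spring. A sketch under that assumption, with all constants illustrative:

```python
import numpy as np

def leg_force_linear(dl, k=20.0):
    """Linear leg spring: F = k * dl (dimensionless units)."""
    return k * dl

def leg_force_jcurve(dl, c=2.0, b=3.0):
    """An exponential 'J'-curve force law, F = c * (exp(b * dl) - 1),
    chosen as a typical soft-tissue-like stiffening law; this functional
    form is an assumption, not the paper's formulation."""
    return c * (np.exp(b * dl) - 1.0)

dl = np.linspace(0.0, 0.3, 50)          # virtual leg compression
f_lin = leg_force_linear(dl)
f_j = leg_force_jcurve(dl)
stiff_j = np.gradient(f_j, dl)          # tangent stiffness dF/d(dl)
```

The key qualitative difference is that the J-curve's tangent stiffness grows with compression, so the leg is soft at touchdown and stiff at deep compression, which is the property the paper credits for the wider tolerated range of speeds and landing angles.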
When the SLIP model and the two-segment leg model with constant spring stiffness and with "J"-curve spring stiffness have the same dimensionless reference stiffness, the two-segment leg model with "J"-curve spring stiffness reveals that (1) both the largest tolerated range of running speeds and the tolerated maximum running speed are found and (2) at fast running speed from 25 to 40/92 m s−1 both the tolerated range of landing angle and the stability region are the largest. It is suggested that the two-segment leg model with "J"-curve spring stiffness is more advantageous for high-speed running compared with the SLIP model and with constant spring stiffness.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5150119','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5150119"><span>Influence of “J”-Curve Spring Stiffness on Running Speeds of Segmented Legs during High-Speed Locomotion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>Both the linear leg spring model and the two-segment leg model with constant spring stiffness have been broadly used as template models to investigate bouncing gaits for legged robots with compliant legs. In addition to these two models, other stiffness leg spring models developed using inspiration from biological characteristics have the potential to improve the high-speed running capacity of spring-legged robots. In this paper, we investigate the effects of “J”-curve spring stiffness inspired by biological materials on running speeds of segmented legs during high-speed locomotion. Mathematical formulation of the relationship between the virtual leg force and the virtual leg compression is established. 
When the SLIP model and the two-segment leg model with constant spring stiffness and with “J”-curve spring stiffness have the same dimensionless reference stiffness, the two-segment leg model with “J”-curve spring stiffness reveals that (1) both the largest tolerated range of running speeds and the tolerated maximum running speed are found and (2) at fast running speed from 25 to 40/92 m s−1 both the tolerated range of landing angle and the stability region are the largest. It is suggested that the two-segment leg model with “J”-curve spring stiffness is more advantageous for high-speed running compared with the SLIP model and with constant spring stiffness. PMID:28018127</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70189948','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70189948"><span>Phast4Windows: A 3D graphical user interface for the reactive-transport simulator PHAST</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Charlton, Scott R.; Parkhurst, David L.</p> <p>2013-01-01</p> <p>Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties—the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones—and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. 
Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhDT.......280M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhDT.......280M"><span>Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Martynov, Denis</p> <p></p> <p>Laser interferometer gravitational wave observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10Hz - 5kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into physics of the Universe. The initial phase of LIGO started in 2002, and since then data was collected during the six science runs. Instrument sensitivity improved from run to run due to the effort of commissioning team. Initial LIGO has reached designed sensitivity during the last science run, which ended in October 2010. 
In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. A major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. This thesis also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters and design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime, where collection of data becomes possible. States of longitudinal and angular controls of interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that should be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in real time. Sensitivity analysis was done to understand and eliminate noise sources of the instrument. The coupling of noise sources to the gravitational-wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. Static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype are described in the last part of this thesis. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed. Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. 
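The static and adaptive feedforward noise cancellation mentioned above is commonly built around a least-mean-squares (LMS) adaptive filter that learns how a witness channel (e.g. a seismometer) couples into the target channel and subtracts the prediction. The sketch below is a generic textbook LMS, not the aLIGO implementation, and all parameter values are illustrative.

```python
import random

def lms_cancel(witness, target, n_taps=8, mu=0.01):
    """Generic LMS adaptive filter: learn FIR weights so the filtered
    witness signal predicts the noise in the target channel; the
    returned residual is the target with that prediction subtracted."""
    w = [0.0] * n_taps
    residual = []
    for n in range(len(target)):
        # Most recent n_taps witness samples, zero-padded at the start.
        x = [witness[n - i] if n - i >= 0 else 0.0 for i in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # noise prediction
        e = target[n] - y                          # residual
        for i in range(n_taps):                    # LMS weight update
            w[i] += mu * e * x[i]
        residual.append(e)
    return residual, w

if __name__ == "__main__":
    rng = random.Random(0)
    witness = [rng.gauss(0.0, 1.0) for _ in range(5000)]
    target = [0.8 * v for v in witness]  # linear coupling into target
    res, w = lms_cancel(witness, target)
    tail = res[2500:]
    print(sum(e * e for e in tail) / len(tail))  # residual power after convergence
```

After convergence the filter weight approaches the coupling coefficient and the late-time residual power drops well below the raw target power, which is the sense in which feedforward subtraction reduces the noise coupling.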
This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 better than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance for the first direct detection of gravitational waves.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMGC41F..07L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMGC41F..07L"><span>The Role of Soil Water and Land Feedbacks in Decadal Drought in Western North America</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Langford, S.; Chikamoto, Y.; Noone, D. C.</p> <p>2013-12-01</p> <p>Western North America is susceptible to severe impacts of megadroughts, as evidenced by tree-core or lake sediment records. Future predictions suggest that this region will become more arid, with further consequences for water resources. Understanding the mechanisms of drought variability and persistence in western North America is critical for the eventual development of effective forecasting methods. The ocean is expected to be the main source of decadal memory in the system as the atmosphere varies on a much shorter timescale. The ocean's role in driving the low-frequency variability of the system is potentially predictable. However, low-frequency precipitation anomalies in western North America can occur in the absence of ocean feedbacks. Sea surface temperature anomalies in the north Pacific Ocean only account for around 20 per cent of the low-frequency winter precipitation in California in the CMIP5 historical runs. 
This is not sufficient for the skill of global coupled models in predicting ocean conditions ahead of time to be used to successfully forecast the possibility of long-term drought in western North America. Megadroughts therefore may be generated by unpredictable atmospheric noise, or sustained by other sources of low-frequency variability such as land processes and feedbacks. Snowpack in western North America is a crucial water resource for the surrounding communities, storing the winter precipitation for use later in the year. Likewise, soil moisture integrates the precipitation signal; the time scale depends on the depth and characteristics of the soil. Water storage and related variables are more predictable on longer timescales than precipitation, as measured by anomaly correlation for hindcasts compared to a 'perfect model' control run with CESM1.0.3. The importance of antecedent land conditions in sustaining megadroughts in western North America is explored with ensemble simulations of CESM1.0.3, where the atmosphere is perturbed at the initiation and peak of a megadrought in the control run. Numerical experiments are used to test land-atmosphere feedbacks or memory sources, highlighting the sensitivity of megadrought initiation, persistence and termination to these antecedent conditions. The model results confirm the importance of land processes in projections of future decadal hydroclimate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1257892','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1257892"><span>Stochasticity and efficiency of convection-dominated vs. 
SASI-dominated supernova explosions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Cardall, Christian Y.; Budiardja, Reuben D.</p> <p>2015-10-22</p> <p>We present an initial report on 160 simulations of a highly simplified model of the post-bounce supernova environment in three position space dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a 'critical neutrino luminosity' for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are more stochastic: a sharp threshold critical luminosity is 'smeared out' into a rising probability of explosion over a ~20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3 to 4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AnGeo..29.1295S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AnGeo..29.1295S"><span>Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Soltanzadeh, I.; Azadi, M.; Vakili, G. 
A.</p> <p>2011-07-01</p> <p>Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and for HRM the initial and boundary conditions come from analysis of Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26454024','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26454024"><span>The protective effects of free wheel-running against cocaine psychomotor sensitization persist after exercise cessation in C57BL/6J mice.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lespine, L-F; Tirelli, E</p> <p>2015-12-03</p> <p>Previous literature suggests that free access to a running wheel can attenuate the behavioral responsiveness to addictive drugs in rodents. In a few studies, wheel-running cessation accentuated drug responsiveness. 
Here, we tested whether free wheel-running cessation is followed by (1) an accentuation or (2) an attenuation of cocaine psychomotor sensitization, knowing that no cessation of (continuous) wheel-running is associated with an attenuation of cocaine responsiveness. Male C57BL/6J mice, aged 35 days, were housed singly either with (exercising mice) or without (non-exercising mice) a running wheel. At the end of a period of 36 days, half of the exercising mice were deprived of their wheel whereas the other half of the exercising mice kept their wheel until the end of experimentation (which lasted 85 days). The non-exercising mice were housed without a wheel throughout experimentation. Testing took place 3 days after exercise cessation. After 2 once-daily drug-free test sessions, mice were tested for initiation of psychomotor sensitization over 13 once-daily injections of 8 mg/kg cocaine. Post-sensitization conditioned activation (saline challenge) and long-term expression of sensitization were assessed 2 or 30 days after the last sensitizing injection (same treatments as for initiation of sensitization), respectively. Exercising mice and mice undergoing wheel-running cessation exhibited comparable degrees of attenuation of all cocaine effects in comparison with the continuously non-exercising mice, which showed the greatest effects. Thus, the efficaciousness of wheel-running at attenuating cocaine sensitization not only resisted exercise cessation but was also unambiguously persistent (an important effect rarely reported in previous literature). Copyright © 2015 IBRO. Published by Elsevier Ltd. 
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25697150','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25697150"><span>Lower-volume muscle-damaging exercise protects against high-volume muscle-damaging exercise and the detrimental effects on endurance performance.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Burt, Dean; Lamb, Kevin; Nicholas, Ceri; Twist, Craig</p> <p>2015-07-01</p> <p>This study examined whether lower-volume exercise-induced muscle damage (EIMD) performed 2 weeks before high-volume muscle-damaging exercise protects against its 
detrimental effect on running performance. Sixteen male participants were randomly assigned to a lower-volume (five sets of ten squats, n = 8) or high-volume (ten sets of ten squats, n = 8) EIMD group and completed baseline measurements for muscle soreness, knee extensor torque, creatine kinase (CK), a 5-min fixed-intensity running bout and a 3-km running time-trial. Measurements were repeated 24 and 48 h after EIMD, and the running time-trial after 48 h. Two weeks later, both groups repeated the baseline measurements, ten sets of ten squats and the same follow-up testing (Bout 2). Data analysis revealed increases in muscle soreness and CK and decreases in knee extensor torque 24-48 h after the initial bouts of EIMD. Increases in oxygen uptake [Formula: see text], minute ventilation [Formula: see text] and rating of perceived exertion were observed during fixed-intensity running 24-48 h after EIMD Bout 1. Likewise, time increased and speed and [Formula: see text] decreased during a 3-km running time-trial 48 h after EIMD. Symptoms of EIMD, and responses during both fixed-intensity running and the running time-trial, were attenuated in the days after the repeated bout of high-volume EIMD performed 2 weeks after the initial bout. This study demonstrates that the protective effect of lower-volume EIMD on subsequent high-volume EIMD is transferable to endurance running. 
Furthermore, time-trial performance was found to be preserved after a repeated bout of EIMD.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017GMD....10.3085H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017GMD....10.3085H"><span>Biogenic isoprene emissions driven by regional weather predictions using different initialization methods: case studies during the SEAC4RS and DISCOVER-AQ airborne campaigns</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Min; Carmichael, Gregory R.; Crawford, James H.; Wisthaler, Armin; Zhan, Xiwu; Hain, Christopher R.; Lee, Pius; Guenther, Alex B.</p> <p>2017-08-01</p> <p>Land and atmospheric initial conditions of the Weather Research and Forecasting (WRF) model are often interpolated from a different model output. We perform case studies during NASA's SEAC4RS and DISCOVER-AQ Houston airborne campaigns, demonstrating that using land initial conditions directly downscaled from a coarser resolution dataset led to significant positive biases in the coupled NASA-Unified WRF (NUWRF, version 7) surface and near-surface air temperature and planetary boundary layer height (PBLH) around the Missouri Ozarks and Houston, Texas, as well as poorly partitioned latent and sensible heat fluxes. Replacing land initial conditions with the output from a long-term offline Land Information System (LIS) simulation can effectively reduce the positive biases in NUWRF surface air temperature by ˜ 2 °C. We also show that the LIS land initialization can modify surface air temperature errors almost 10 times as effectively as applying a different atmospheric initialization method. 
The LIS-NUWRF-based isoprene emission calculations by the Model of Emissions of Gases and Aerosols from Nature (MEGAN, version 2.1) are at least 20 % lower than those computed using the coarser resolution data-initialized NUWRF run, and are closer to aircraft-observation-derived emissions. Higher resolution MEGAN calculations are prone to amplified discrepancies with aircraft-observation-derived emissions on small scales. This is possibly a result of some limitations of MEGAN's parameterization and uncertainty in its inputs on small scales, as well as the representation error and the neglect of horizontal transport in deriving emissions from aircraft data. This study emphasizes the importance of proper land initialization to the coupled atmospheric weather modeling and the follow-on emission modeling. We anticipate it to also be critical to accurately representing other processes included in air quality modeling and chemical data assimilation. Having more confidence in the weather inputs is also beneficial for determining and quantifying the other sources of uncertainties (e.g., parameterization, other input data) of the models that they drive.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1437988-scidac-data-enabling-data-driven-modeling-exascale-computing','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1437988-scidac-data-enabling-data-driven-modeling-exascale-computing"><span>Scidac-Data: Enabling Data Driven Modeling of Exascale Computing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...</p> <p>2017-11-23</p> <p>Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy 
physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. 
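The kind of trace-driven queuing simulation described here can be illustrated with a toy event-driven model: jobs from a workflow trace (arrival time, run time) are served first-in-first-out by a fixed pool of identical workers. This is a minimal sketch of the idea, not the SciDAC-Data simulator.

```python
import heapq

def simulate_fifo_queue(jobs, n_workers):
    """Toy trace-driven queue: jobs are (arrival_time, duration) pairs
    served FIFO by n_workers identical workers. Returns each job's
    completion time, in order of arrival."""
    free_at = [0.0] * n_workers  # time at which each worker next frees up
    heapq.heapify(free_at)
    completions = []
    for arrival, duration in sorted(jobs):
        # A job starts when it has arrived AND the earliest worker is free.
        start = max(arrival, heapq.heappop(free_at))
        heapq.heappush(free_at, start + duration)
        completions.append(start + duration)
    return completions
```

Feeding such a model the (arrival, duration) vectors extracted from the ~71,000 workflow traces would yield completion-time and queue-delay distributions under different worker-pool sizes, which is the style of question the full queuing simulations address.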
We also show how the simulations may be used to assess the impact of design choices in archive facilities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFM.H23E1559D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFM.H23E1559D"><span>On the assimilation of satellite derived soil moisture in numerical weather prediction models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Drusch, M.</p> <p>2006-12-01</p> <p>Satellite derived surface soil moisture data sets are readily available and have been used successfully in hydrological applications. In many operational numerical weather prediction systems the initial soil moisture conditions are analysed from the modelled background and 2 m temperature and relative humidity. This approach has proven its efficiency to improve surface latent and sensible heat fluxes and consequently the forecast on large geographical domains. However, since soil moisture is not always related to screen level variables, model errors and uncertainties in the forcing data can accumulate in root zone soil moisture. Remotely sensed surface soil moisture is directly linked to the model's uppermost soil layer and therefore is a stronger constraint for the soil moisture analysis. Three data assimilation experiments with the Integrated Forecast System (IFS) of the European Centre for Medium-range Weather Forecasts (ECMWF) have been performed for the two-month period of June and July 2002: a control run based on the operational soil moisture analysis, an open loop run with freely evolving soil moisture, and an experimental run incorporating bias corrected TMI (TRMM Microwave Imager) derived soil moisture over the southern United States through a nudging scheme using 6-hourly departures. 
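The nudging scheme described above, which relaxes the model's surface soil moisture toward the bias-corrected satellite retrieval by a fraction of each 6-hourly departure, reduces schematically to the update below. The gain value is purely illustrative, not the IFS setting.

```python
def nudge_soil_moisture(model_sm, obs_sm, gain=0.2):
    """One nudging update: move the modelled surface soil moisture a
    fraction `gain` of the model-observation departure toward the
    (bias-corrected) satellite retrieval. Gain is illustrative."""
    return model_sm + gain * (obs_sm - model_sm)

# Repeated 6-hourly updates pull the model state toward the retrieval:
state = 0.10  # volumetric soil moisture (illustrative units)
for _ in range(10):
    state = nudge_soil_moisture(state, 0.30)
```

Because each update removes only a fraction of the departure, the scheme constrains the uppermost soil layer gradually rather than overwriting it, which keeps the analysis consistent with the model's own dynamics.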
Apart from the soil moisture analysis, the system setup reflects the operational forecast configuration including the atmospheric 4D-Var analysis. Soil moisture analysed in the nudging experiment is the most accurate estimate when compared against in-situ observations from the Oklahoma Mesonet. The corresponding forecast for 2 m temperature and relative humidity is almost as accurate as in the control experiment. Furthermore, it is shown that the soil moisture analysis influences local weather parameters including the planetary boundary layer height and cloud coverage. The transferability of the results to other satellite derived soil moisture data sets will be discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.898f2048M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.898f2048M"><span>Scidac-Data: Enabling Data Driven Modeling of Exascale Computing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert</p> <p>2017-10-01</p> <p>The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. 
These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1437988','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1437988"><span>Scidac-Data: Enabling Data Driven Modeling of Exascale Computing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo</p> <p></p> <p>Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. 
The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. 
We also show how the simulations may be used to assess the impact of design choices in archive facilities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19087944','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19087944"><span>Decadal climate prediction (project GCEP).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haines, Keith; Hermanson, Leon; Liu, Chunlei; Putt, Debbie; Sutton, Rowan; Iwi, Alan; Smith, Doug</p> <p>2009-03-13</p> <p>Decadal prediction uses climate models forced by changing greenhouse gases, as in the Intergovernmental Panel on Climate Change, but unlike longer range predictions they also require initialization with observations of the current climate. In particular, the upper-ocean heat content and circulation have a critical influence. Decadal prediction is still in its infancy and there is an urgent need to understand the important processes that determine predictability on these timescales. We have taken the first Hadley Centre Decadal Prediction System (DePreSys) and implemented it on several NERC institute compute clusters in order to study a wider range of initial condition impacts on decadal forecasting, eventually including the state of the land and cryosphere. The eScience methods are used to manage submission and output from the many ensemble model runs required to assess predictive skill. Early results suggest initial condition skill may extend for several years, even over land areas, but this depends sensitively on the definition used to measure skill, and alternatives are presented. 
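Because skill depends sensitively on the definition used to measure it, two common generic definitions are sketched below: a mean-squared-error skill score against a reference forecast, and the anomaly correlation. These are standard forecast-verification formulas, not the specific DePreSys diagnostics, and the data in the example call are made up.

```python
import math

def rmse_skill(forecast, obs, reference):
    """Mean-squared-error skill score: 1 - MSE(forecast)/MSE(reference).

    Positive values mean the forecast beats the reference (e.g. climatology);
    1.0 is a perfect forecast, 0.0 means no improvement over the reference.
    """
    mse = sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs)
    mse_ref = sum((r - o) ** 2 for r, o in zip(reference, obs)) / len(obs)
    return 1.0 - mse / mse_ref

def anomaly_correlation(forecast, obs, climatology):
    """Correlation between forecast and observed anomalies from climatology."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(obs, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den

# Hypothetical two-point verification against a zero-anomaly reference:
skill = rmse_skill(forecast=[0.8, 1.1], obs=[1.0, 1.0], reference=[0.0, 0.0])
```

The same forecast can score very differently under the two measures, which is why reporting the chosen definition matters.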
The Grid for Coupled Ensemble Prediction (GCEP) system will allow the UK academic community to contribute to international experiments being planned to explore decadal climate predictability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..306a2001B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..306a2001B"><span>New Model of Information Technology Governance in the Government of Gorontalo City using Framework COBIT 4.1</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bouty, A. A.; Koniyo, M. H.; Novian, D.</p> <p>2018-02-01</p> <p>This study aims to determine the maturity level of information technology governance in the Gorontalo city government by applying the COBIT 4.1 framework. The research uses the case study method, conducting surveys and data collection at 25 institutions in Gorontalo City. The result of this study is an analysis of information technology needs based on the measurement of maturity level. The measurement of the maturity level of information technology governance shows that many business processes still run at a low level: of the 9 existing business processes, 4 are at level 2 (Repeatable but Intuitive) and 3 are at level 1 (Initial/Ad hoc). 
Based on these results, it is expected that the government of Gorontalo City will promptly improve its information technology governance so that it can run more effectively and efficiently.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JPhCS.664c2003B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JPhCS.664c2003B"><span>Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.</p> <p>2015-12-01</p> <p>During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. 
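As a minimal sketch of the kind of data-popularity bookkeeping described here: aggregate access events into per-dataset counts, then derive a placement hint from them. The function names, log format, and the one-replica-per-N-accesses heuristic are all invented for this illustration, not CMS's actual popularity service or placement policy.

```python
from collections import Counter

def popularity(access_log):
    """Count accesses per dataset from a log of (user, dataset) events."""
    return Counter(dataset for _user, dataset in access_log)

def suggest_replicas(pop, accesses_per_replica=100, max_replicas=5):
    """Naive placement hint: one replica per `accesses_per_replica` accesses
    (rounded up), capped at `max_replicas`. A real dynamic data management
    system would also weigh storage and network cost."""
    return {ds: min(max_replicas, max(1, -(-n // accesses_per_replica)))
            for ds, n in pop.items()}

# Hypothetical access log: one heavily used dataset, one rarely used.
log = [("u1", "dsA")] * 250 + [("u2", "dsB")] * 30
pop = popularity(log)
replicas = suggest_replicas(pop)
```

Replacing the toy log with monitored popularity data turns this into the "tune the initial distribution" exercise the abstract describes.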
This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves validating the quality of the monitoring data collected on the “popularity” of each dataset; analysing the frequency and pattern of accesses to different datasets by analysis end-users; exploring different views of the popularity data (by physics activity, by region, by data type); studying the evolution of Run-1 data exploitation over time; and evaluating the impact of different data placement and distribution choices on the available network and storage resources and on computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22462908','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22462908"><span>Thermal effects in the Input Optics of the Enhanced Laser Interferometer Gravitational-Wave Observatory interferometers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dooley, Katherine L; Arain, Muzammil A; Feldbaum, David; Frolov, Valery V; Heintze, Matthew; Hoak, Daniel; Khazanov, Efim A; Lucianetti, Antonio; Martin, Rodica M; Mueller, Guido; Palashov, Oleg; Quetschke, Volker; Reitze, David H; Savage, R L; Tanner, D B; Williams, Luke F; Wu, Wan</p> <p>2012-03-01</p> <p>We present the design and performance of the LIGO Input Optics subsystem as implemented for the sixth science run of the 
LIGO interferometers. The Initial LIGO Input Optics experienced thermal side effects when operating with 7 W input power. We designed, built, and implemented improved versions of the Input Optics for Enhanced LIGO, an incremental upgrade to the Initial LIGO interferometers, designed to run with 30 W input power. At four times the power of Initial LIGO, the Enhanced LIGO Input Optics demonstrated improved performance including better optical isolation, less thermal drift, minimal thermal lensing, and higher optical efficiency. The success of the Input Optics design fosters confidence for its ability to perform well in Advanced LIGO.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1611133V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1611133V"><span>Integrated assessment of future land use in Brazil under increasing demand for bioenergy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Verstegen, Judith; van der Hilst, Floor; Karssenberg, Derek; Faaij, André</p> <p>2014-05-01</p> <p>Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatiotemporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. 
For the spatio-temporal projection, four model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) an economic model to project the quantity of change of all land uses, 3) a spatially explicit land use model that determines the location of change of all land uses, and 4) various analyses to determine the impacts of these changes on water, socio-economics, and biodiversity. All four model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics in both Brazil itself and other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries of this change, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from literature. Using a Bayesian data assimilation technique and census data from 2007 to 2012 as observational data, the model is identified, meaning that the final selection and optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. 
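The core idea of propagating errors through a chain of components can be sketched as a Monte Carlo loop: draw an ensemble of uncertain inputs, push each member through every component (each of which may add its own noise), and read the uncertainty off the spread of the outputs. The two toy components and all numbers below are illustrative stand-ins, not MAGNET or the actual land use model.

```python
import random

def propagate(samples, components, seed=42):
    """Propagate an ensemble of stochastic inputs through a chain of model
    components, each of which may itself add noise. Returns the ensemble of
    final outputs, whose spread quantifies the accumulated uncertainty."""
    rng = random.Random(seed)
    outputs = []
    for x in samples:
        for component in components:
            x = component(x, rng)
        outputs.append(x)
    return outputs

# Two toy components standing in for the economic and land-use models:
economic = lambda area, rng: area * rng.gauss(1.10, 0.05)  # ~10% growth, uncertain
allocation = lambda area, rng: area + rng.gauss(0.0, 2.0)  # noisy spatial allocation

# Uncertain initial condition, e.g. disaggregated census area (arbitrary units):
ensemble = [100.0 + random.Random(i).gauss(0, 5) for i in range(200)]
result = propagate(ensemble, [economic, allocation])
mean = sum(result) / len(result)
```

The spread of `result` plays the role of the uncertainty maps the study reports; with real components the loop is unchanged, only the component functions differ.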
Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map it is mainly the shapes of the sugar cane and other land use patches that are uncertain, rather than their locations. From the economic model we can derive that dynamics in the livestock sector play a major role in the land use development of Brazil; the effect of this uncertainty on the model output is large. If the intensity of the livestock sector is not increased, future projections show a large loss of natural vegetation. Impacts on water are modest, except when irrigation is applied on the expanded cropland.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/5970270-lapack-working-note-installing-testing-initial-release-lapack-unix-non-unix-versions','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/5970270-lapack-working-note-installing-testing-initial-release-lapack-unix-non-unix-versions"><span>LAPACK working note No. 10: Installing and testing the initial release of LAPACK Unix and non-Unix versions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Anderson, E.; Dongarra, J.</p> <p>1989-05-01</p> <p>This working note describes how to install and test the initial release of LAPACK. LAPACK is intended to provide a uniform set of subroutines to solve the most common linear algebra problems and to run efficiently on a wide range of architectures. The routines presented at this time are intended not for general distribution, but only for initial testing. We expect the testing to reveal weaknesses in the design, and we plan to modify routines to correct any deficiencies. 
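Today the LAPACK routines this working note describes are usually reached through high-level wrappers rather than the Fortran interfaces directly. A small sketch: NumPy's `numpy.linalg.solve` dispatches to LAPACK's `gesv` family (LU factorization with partial pivoting), and a residual check mirrors the spirit of LAPACK's installation tests.

```python
import numpy as np

# Solve A x = b, the kind of dense linear-algebra problem LAPACK targets.
# numpy.linalg.solve dispatches to LAPACK's gesv (LU with partial pivoting).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)

# Residual check, analogous in spirit to LAPACK's own installation tests.
residual = np.linalg.norm(A @ x - b)
```

For this system the exact solution is x = (2, 3), and the residual should sit at round-off level on any LAPACK build.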
The instructions for installing, testing, and timing are designed for a person whose responsibility is the maintenance of a mathematical software library. This paper provides instructions for Unix users installing a tar tape, and contains instructions for non-Unix users. We assume the installer has experience in compiling and running Fortran programs and in creating object libraries. The installation process involves reading a tape, creating a library from the Fortran source, running the tests, and sending the results to Argonne. 6 refs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930017966','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930017966"><span>Comparing the results of an analytical model of the no-vent fill process with no-vent fill test results for a 4.96 cubic meters (175 cubic feet) tank</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Taylor, William J.; Chato, David J.</p> <p>1993-01-01</p> <p>The NASA Lewis Research Center (NASA/LeRC) has been investigating a no-vent fill method for refilling cryogenic storage tanks in low gravity. Analytical modeling based on analyzing the heat transfer of a droplet has successfully represented the process in 0.034 and 0.142 cubic m commercial dewars using liquid nitrogen and hydrogen. Recently a large tank (4.96 cubic m) was tested with hydrogen. This lightweight tank is representative of spacecraft construction. This paper presents efforts to model the large tank test data. The droplet heat transfer model is found to overpredict the tank pressure level when compared to the large tank data. A new model based on equilibrium thermodynamics has been formulated. This new model is compared to the published large scale tank's test results as well as some additional test runs with the same equipment. 
The results are shown to match the test results within the measurement uncertainty of the test data except for the initial transient wall cooldown where it is conservative (i.e., overpredicts the initial pressure spike found in this time frame).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/8963','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/8963"><span>Implications of random variation in the Stand Prognosis Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>David A. Hamilton</p> <p>1991-01-01</p> <p>Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120003999','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120003999"><span>Effects of Real-Time NASA Vegetation Data on Model Forecasts of Severe Weather</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Case, Jonathan L.; Bell, Jordan R.; LaFontaine, Frank J.; Peters-Lidard, Christa D.</p> <p>2012-01-01</p> <p>The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard the NASA-EOS Aqua and Terra satellites. 
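GVF is commonly derived from NDVI by linearly rescaling between a bare-soil endpoint and a dense-vegetation endpoint. A minimal sketch of that rescaling follows; the endpoint values here are illustrative assumptions, not SPoRT's operational constants.

```python
def gvf(ndvi, ndvi_soil=0.07, ndvi_veg=0.52):
    """Greenness Vegetation Fraction from NDVI via linear rescaling between a
    bare-soil endpoint and a dense-vegetation endpoint (endpoint values here
    are illustrative, not the operational constants). Clamped to [0, 1]."""
    frac = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, frac))

# Composited NDVI values for three hypothetical pixels:
pixels = [0.05, 0.30, 0.60]
fractions = [gvf(v) for v in pixels]
```

Applying this per pixel to daily composited NDVI swaths yields a gridded GVF field of the kind described above.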
NASA SPoRT started generating daily real-time GVF composites at 1-km resolution over the Continental United States beginning 1 June 2010. A companion poster presentation (Bell et al.) primarily focuses on impact results in an offline configuration of the Noah land surface model (LSM) for the 2010 warm season, comparing the SPoRT/MODIS GVF dataset to the current operational monthly climatology GVF available within the National Centers for Environmental Prediction (NCEP) and Weather Research and Forecasting (WRF) models. This paper/presentation primarily focuses on individual case studies of severe weather events to determine the impacts and possible improvements by using the real-time, high-resolution SPoRT-MODIS GVFs in place of the coarser-resolution NCEP climatological GVFs in model simulations. The NASA-Unified WRF (NU-WRF) modeling system is employed to conduct the sensitivity simulations of individual events. The NU-WRF is an integrated modeling system based on the Advanced Research WRF dynamical core that is designed to represent aerosol, cloud, precipitation, and land processes at satellite-resolved scales in a coupled simulation environment. For this experiment, the coupling between the NASA Land Information System (LIS) and the WRF model is utilized to measure the impacts of the daily SPoRT/MODIS versus the monthly NCEP climatology GVFs. First, a spin-up run of the LIS is integrated for two years using the Noah LSM to ensure that the land surface fields reach an equilibrium state on the 4-km grid mesh used. Next, the spin-up LIS is run in two separate modes beginning on 1 June 2010, one continuing with the climatology GVFs while the other uses the daily SPoRT/MODIS GVFs. Finally, snapshots of the LIS land surface fields are used to initialize two different simulations of the NU-WRF, one running with climatology LIS and GVFs, and the other running with experimental LIS and NASA/SPoRT GVFs. 
In this paper/presentation, case study results will be highlighted in regions with significant differences in GVF between the NCEP climatology and SPoRT product during severe weather episodes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26673987','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26673987"><span>CHANGES IN PATELLOFEMORAL JOINT STRESS DURING RUNNING WITH THE APPLICATION OF A PREFABRICATED FOOT ORTHOTIC.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Almonroeder, Thomas G; Benson, Lauren C; O'Connor, Kristian M</p> <p>2015-12-01</p> <p>Foot orthotics are commonly utilized in the treatment of patellofemoral pain (PFP) and have shown clinical benefit; however, their mechanism of action remains unclear. Patellofemoral joint stress (PFJS) is thought to be one of the main etiological factors associated with PFP. The primary purpose of this study was to investigate the effects of a prefabricated foot orthotic with 5 ° of medial rearfoot wedging on the magnitude and the timing of the peak PFJS in a group of healthy female recreational athletes. The hypothesis was that there would be significant reduction in the peak patellofemoral joint stress and a delay in the timing of this peak in the orthotic condition. Cross-sectional. Kinematic and kinetic data were collected during running trials in a group of healthy, female recreational athletes. The knee angle and moment data in the sagittal plane were incorporated into a previously developed model to estimate patellofemoral joint stress. The dependent variables of interest were the peak patellofemoral joint stress as well as the percentage of stance at which this peak occurred, as both the magnitude and the timing of the joint loading are thought to be important in overuse running injuries. 
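Extracting the study's two dependent variables, the peak stress and the timing of that peak as a percentage of stance, can be sketched as follows. The sampled stress curve is hypothetical, not data from this study, and real gait analyses interpolate to a finer time base.

```python
def peak_and_timing(stress):
    """Return the peak value of a stance-phase time series and the timing of
    that peak as a percentage of stance (0% = foot contact, 100% = toe-off)."""
    peak = max(stress)
    idx = stress.index(peak)
    timing_pct = 100.0 * idx / (len(stress) - 1)
    return peak, timing_pct

# Hypothetical patellofemoral stress curve sampled at 11 points of stance:
stress = [0.0, 2.1, 4.8, 7.9, 10.2, 11.5, 10.9, 8.4, 5.0, 2.2, 0.3]
peak, timing = peak_and_timing(stress)
```

Comparing `peak` and `timing` between conditions (e.g. with and without an orthotic) is exactly the contrast the study's statistics are built on.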
The peak patellofemoral joint stress significantly increased in the orthotic condition by 5.8% (p=.02, ES=0.24), which does not support the initial hypothesis. However, the orthotic did significantly delay the timing of the peak during the stance phase by 3.8% (p=.002, ES=0.47). The finding that the peak patellofemoral joint stress increased in the orthotic condition did not support the initial hypothesis. However, the finding that the timing of this peak was delayed to later in the stance phase in the orthotic condition did support the initial hypothesis and may be related to the clinical improvements previously reported in subjects with PFP. Level 4.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EOSTr..93..155S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EOSTr..93..155S"><span>White House announces “big data” initiative</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Showstack, Randy</p> <p>2012-04-01</p> <p>The world is now generating zettabytes (10 to the 21st power, or a billion trillion bytes) of information every year, according to John Holdren, director of the White House Office of Science and Technology Policy. With data volumes growing exponentially from a variety of sources such as computers running large-scale models, scientific instruments including telescopes and particle accelerators, and even online retail transactions, a key challenge is to better manage and utilize the data. The Big Data Research and Development Initiative, launched by the White House at a 29 March briefing, initially includes six federal departments and agencies providing more than $200 million in new commitments to improve tools and techniques for better accessing, organizing, and using data for scientific advances. 
The agencies and departments include the National Science Foundation (NSF), Department of Energy, U.S. Geological Survey (USGS), National Institutes of Health (NIH), Department of Defense, and Defense Advanced Research Projects Agency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.C42B..02D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.C42B..02D"><span>Will sea ice thickness initialisation improve Arctic seasonal-to-interannual forecast skill?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Day, J. J.; Hawkins, E.; Tietsche, S.</p> <p>2014-12-01</p> <p>A number of recent studies have suggested that Arctic sea ice thickness is an important predictor of Arctic sea ice extent. However, coupled forecast systems do not currently use sea ice thickness observations in their initialization and are therefore missing a potentially important source of additional skill. A set of ensemble potential predictability experiments, with a global climate model, initialized with and without knowledge of the sea ice thickness initial state, have been run to investigate this. These experiments show that accurate knowledge of the sea ice thickness field is crucially important for sea ice concentration and extent forecasts up to eight months ahead. Perturbing sea ice thickness also has a significant impact on the forecast error in the 2m temperature and surface pressure fields a few months ahead. 
These results show that advancing capabilities to observe and assimilate sea ice thickness into coupled forecast systems could significantly increase skill.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.4827K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.4827K"><span>Scalable and balanced dynamic hybrid data assimilation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa</p> <p>2017-04-01</p> <p>Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. 
Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them implemented as parallel model runs themselves. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. 
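The wrapper design described here, many totally independent model runs with only a gather/scatter step in between, can be sketched with the standard library. The relaxation "model" and all parameters below are invented for illustration; a real VEnKF wrapper would launch full forecast-model executions instead.

```python
from concurrent.futures import ThreadPoolExecutor

def model_run(state, steps=10):
    """Toy stand-in for one forecast-model integration. Each run depends only
    on its own initial state, which is what makes the ensemble step parallel."""
    for _ in range(steps):
        state = state + 0.1 * (1.0 - state)  # relax toward an attractor at 1.0
    return state

def propagate_ensemble(initial_states, workers=4):
    """Scatter initial states, run members concurrently, gather final states.
    The gather/scatter is the only serial bottleneck, mirroring the wrapper
    design described above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(model_run, initial_states))

ensemble = [0.0, 0.5, 2.0]
finals = propagate_ensemble(ensemble)
```

Because the members never communicate, adding workers (or swapping in process- or cluster-level parallelism) changes only `propagate_ensemble`, not the model.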
We present example results of the scalable VEnKF with the 4D lake and shallow sea model COHERENS, simultaneously assimilating continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15222966','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15222966"><span>Responding for sucrose and wheel-running reinforcement: effect of body weight manipulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Belke, Terry W</p> <p>2004-02-27</p> <p>As body weight increases, the excitatory strength of a stimulus signaling an opportunity to run should weaken to a greater degree than that of a stimulus signaling an opportunity to eat. To test this hypothesis, six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. The effect of varying body weight on responding maintained by these two reinforcers was investigated by systematically increasing and decreasing post-session food amounts. The initial body weight was 335 g. Body weights were increased to approximately 445 g and subsequently returned to 335 g. As body weight increased, overall and local lever-pressing rates decreased while post-reinforcement pauses lengthened. Analysis of post-reinforcement pauses and local lever-pressing rates in terms of transitions between successive reinforcers revealed that local response rates in the presence of stimuli signaling upcoming wheel and sucrose reinforcers were similarly affected. 
However, pausing in the presence of the stimulus signaling a wheel-running reinforcer lengthened to a greater extent than did pausing in the presence of the stimulus signaling sucrose. This result suggests that as body weight approaches ad-lib levels, the likelihood of initiation of responding to obtain an opportunity to run approaches zero and the animal "rejects" the opportunity to run in a manner similar to the rejection of less preferred food items in studies of food selectivity.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18672455','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18672455"><span>SARANA: language, compiler and run-time system support for 
spatially aware and resource-aware mobile computing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei</p> <p>2008-10-28</p> <p>Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. 
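A broker that trades quality of result (QoR) off against cost and latency budgets might look like the following sketch. The node fields, units, and selection rule are assumptions made for illustration; they are not SARANA's actual schema or resource cost model.

```python
def pick_node(nodes, max_cost, max_latency_ms):
    """Choose the candidate node with the best quality-of-result (QoR) among
    those meeting the user's cost and latency budgets; None if none qualify.
    Field names and units are illustrative, not an actual system's schema."""
    feasible = [n for n in nodes
                if n["cost"] <= max_cost and n["latency_ms"] <= max_latency_ms]
    return max(feasible, key=lambda n: n["qor"], default=None)

# Hypothetical candidate nodes spanning the laptop-to-phone range:
nodes = [
    {"name": "laptop", "qor": 0.9, "cost": 5.0, "latency_ms": 120},
    {"name": "pda",    "qor": 0.6, "cost": 1.0, "latency_ms": 40},
    {"name": "phone",  "qor": 0.7, "cost": 2.0, "latency_ms": 60},
]
best = pick_node(nodes, max_cost=3.0, max_latency_ms=100)
```

Expressing budgets in user-meaningful units and filtering before maximizing QoR is one simple way a run-time system can broker such trade-offs dynamically.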
Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=consumer+AND+behavior+AND+involvement&id=EJ958172','ERIC'); return false;" href="https://eric.ed.gov/?q=consumer+AND+behavior+AND+involvement&id=EJ958172"><span>How Settings Change People: Applying Behavior Setting Theory to Consumer-Run Organizations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Brown, Louis D.; Shepherd, Matthew D.; Wituk, Scott A.; Meissen, Greg</p> <p>2007-01-01</p> <p>Self-help initiatives stand as a classic context for organizational studies in community psychology. Behavior setting theory stands as a classic conception of organizations and the environment. This study explores both, applying behavior setting theory to consumer-run organizations (CROs). Analysis of multiple data sets from all CROs in Kansas…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=281931','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=281931"><span>Spatial application of WEPS for estimating wind erosion in the Pacific Northwest</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS?s Scientific Manuscript database</a></p> <p></p> <p></p> <p>The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. 
This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA460403','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA460403"><span>Extending Orthogonal and Nearly Orthogonal Latin Hypercube Designs for Computer Simulation and Experimentation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2006-12-01</p> <p>The code was initially developed to be run within the netBeans IDE 5.04 running J2SE 5.0. During the course of the development, Eclipse SDK 3.2...covers the results from the research. Chapter V concludes and recommends future research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1109412.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1109412.pdf"><span>Children's Conceptual Development: A Long-Run Investigation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Saglam, Yilmaz; Ozbek, Merve</p> <p>2016-01-01</p> <p>The study sought to investigate the conceptual change process. It specifically aimed to probe children's initial ideas and how and in what ways those ideas alter in the long run. A total of 18 children volunteered and participated in the study. Individual interviews were conducted.
The children were asked to define the concept of evaporation, explain…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27870917','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27870917"><span>Combating Rhino Horn Trafficking: The Need to Disrupt Criminal Networks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haas, Timothy C; Ferreira, Sam M</p> <p>2016-01-01</p> <p>The onslaught on the World's wildlife continues despite numerous initiatives aimed at curbing it. We build a model that integrates rhino horn trade with rhino population dynamics in order to evaluate the impact of various management policies on rhino sustainability. In our model, an agent-based sub-model of horn trade from the poaching event up through a purchase of rhino horn in Asia impacts rhino abundance. A data-validated, individual-based sub-model of the rhino population of South Africa provides these abundance values. We evaluate policies that consist of different combinations of legal trade initiatives, demand reduction marketing campaigns, increased anti-poaching measures within protected areas, and transnational policing initiatives aimed at disrupting those criminal syndicates engaged in horn trafficking. Simulation runs of our model over the next 35 years produce a sustainable rhino population under only one management policy. This policy couples a transnational policing effort aimed at dismantling those criminal networks engaged in rhino horn trafficking with increases in legal economic opportunities for people living next to protected areas where rhinos live. This multi-faceted approach should be the focus of the international debate on strategies to combat the current slaughter of rhinos rather than the binary debate about whether rhino horn trade should be legalized.
This approach to the evaluation of wildlife management policies may be useful to apply to other species threatened by wildlife trafficking.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5117767','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5117767"><span>Combating Rhino Horn Trafficking: The Need to Disrupt Criminal Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Haas, Timothy C.; Ferreira, Sam M.</p> <p>2016-01-01</p> <p>The onslaught on the World’s wildlife continues despite numerous initiatives aimed at curbing it. We build a model that integrates rhino horn trade with rhino population dynamics in order to evaluate the impact of various management policies on rhino sustainability. In our model, an agent-based sub-model of horn trade from the poaching event up through a purchase of rhino horn in Asia impacts rhino abundance. A data-validated, individual-based sub-model of the rhino population of South Africa provides these abundance values. We evaluate policies that consist of different combinations of legal trade initiatives, demand reduction marketing campaigns, increased anti-poaching measures within protected areas, and transnational policing initiatives aimed at disrupting those criminal syndicates engaged in horn trafficking. Simulation runs of our model over the next 35 years produce a sustainable rhino population under only one management policy. This policy couples a transnational policing effort aimed at dismantling those criminal networks engaged in rhino horn trafficking with increases in legal economic opportunities for people living next to protected areas where rhinos live.
This multi-faceted approach should be the focus of the international debate on strategies to combat the current slaughter of rhinos rather than the binary debate about whether rhino horn trade should be legalized. This approach to the evaluation of wildlife management policies may be useful to apply to other species threatened by wildlife trafficking. PMID:27870917</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29857684','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29857684"><span>Local and global dynamics of Ramsey model: From continuous to discrete time.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guzowska, Malgorzata; Michetti, Elisabetta</p> <p>2018-05-01</p> <p>The choice of time as a discrete or continuous variable may radically affect equilibrium stability in an endogenous growth model with durable consumption. In the continuous-time Ramsey model [F. P. Ramsey, Econ. J. 38(152), 543-559 (1928)], the steady state is locally saddle-path stable with monotonic convergence. However, in the discrete-time version, the steady state may be unstable or saddle-path stable with monotonic or oscillatory convergence or periodic solutions [see R.-A. Dana et al., Handbook on Optimal Growth 1 (Springer, 2006) and G. Sorger, Working Paper No. 1505 (2015)]. When this occurs, the discrete-time counterpart of the continuous-time model is not consistent with the initial framework. In order to obtain a discrete-time Ramsey model preserving the main properties of the continuous-time counterpart, we use a general backward and forward discretisation as initially proposed by Bosi and Ragot [Theor. Econ. Lett. 2(1), 10-15 (2012)]. The main result of the study presented here is that, with this hybrid discretisation method, fixed points and local dynamics do not change.
As far as global dynamics are concerned, i.e., long-run behavior for initial conditions taken in the state space, we mainly perform numerical analysis with the aim of comparing both the qualitative and quantitative evolution of the two systems, also varying some parameters of interest.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018Chaos..28e5902G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018Chaos..28e5902G"><span>Local and global dynamics of Ramsey model: From continuous to discrete time</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guzowska, Malgorzata; Michetti, Elisabetta</p> <p>2018-05-01</p> <p>The choice of time as a discrete or continuous variable may radically affect equilibrium stability in an endogenous growth model with durable consumption. In the continuous-time Ramsey model [F. P. Ramsey, Econ. J. 38(152), 543-559 (1928)], the steady state is locally saddle-path stable with monotonic convergence. However, in the discrete-time version, the steady state may be unstable or saddle-path stable with monotonic or oscillatory convergence or periodic solutions [see R.-A. Dana et al., Handbook on Optimal Growth 1 (Springer, 2006) and G. Sorger, Working Paper No. 1505 (2015)]. When this occurs, the discrete-time counterpart of the continuous-time model is not consistent with the initial framework. In order to obtain a discrete-time Ramsey model preserving the main properties of the continuous-time counterpart, we use a general backward and forward discretisation as initially proposed by Bosi and Ragot [Theor. Econ. Lett. 2(1), 10-15 (2012)]. The main result of the study presented here is that, with this hybrid discretisation method, fixed points and local dynamics do not change.
As far as global dynamics are concerned, i.e., long-run behavior for initial conditions taken in the state space, we mainly perform numerical analysis with the aim of comparing both the qualitative and quantitative evolution of the two systems, also varying some parameters of interest.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1324560','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1324560"><span>Climate Modeling: Ocean Cavities below Ice Shelves</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Petersen, Mark Roger</p> <p></p> <p>The Accelerated Climate Model for Energy (ACME), a new initiative by the U.S. Department of Energy, includes unstructured-mesh ocean, land-ice, and sea-ice components using the Model for Prediction Across Scales (MPAS) framework. The ability to run coupled high-resolution global simulations efficiently on large, high-performance computers is a priority for ACME. Sub-ice shelf ocean cavities are a significant new capability in ACME, and will be used to better understand how changing ocean temperature and currents influence glacial melting and retreat.
These simulations take advantage of the horizontal variable-resolution mesh and adaptive vertical coordinate in MPAS-Ocean, in order to place high resolution below ice shelves and near grounding lines.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010108903','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010108903"><span>Aircraft Engine Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Veres, Joseph</p> <p>2001-01-01</p> <p>This report outlines the detailed simulation of an aircraft turbofan engine. The objectives were to develop a detailed flow model of a full turbofan engine that runs on parallel workstation clusters overnight and to develop an integrated system of codes for combustor design and analysis to enable significant reduction in design time and cost. The model will initially simulate the 3-D flow in the primary flow path including the flow and chemistry in the combustor, and ultimately result in a multidisciplinary model of the engine. The overnight 3-D simulation capability of the primary flow path in a complete engine will enable significant reduction in the design and development time of gas turbine engines.
In addition, the NPSS (Numerical Propulsion System Simulation) multidisciplinary integration and analysis are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://water.usgs.gov/nrp/gwsoftware/sutraprep/sutraprep.html','USGSPUBS'); return false;" href="http://water.usgs.gov/nrp/gwsoftware/sutraprep/sutraprep.html"><span>SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Provost, Alden M.</p> <p>2002-01-01</p> <p>SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (".inp") file, an initial conditions (".ics") file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMGC42A..02V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMGC42A..02V"><span>Uncertainty assessment of future land use in Brazil under increasing demand for bioenergy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van der Hilst, F.; Verstegen, J.
A.; Karssenberg, D.; Faaij, A.</p> <p>2013-12-01</p> <p>Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, three model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) a model to project the quantity of change of all land uses, and 3) a spatially explicit land use model that determines the location of change of all land uses. All three model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics in both Brazil itself and other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from literature. 
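The stochastic disaggregation step described above (spreading a municipal census total over grid cells, with a land cover map supplying per-cell probabilities) can be sketched with a weighted random allocation. All numbers here are invented for illustration; repeated draws yield the kind of stochastic land use map ensemble the study uses:

```python
import random

# Hypothetical sketch: disaggregate a municipal sugar-cane area total
# over grid cells, using per-cell probabilities (e.g., derived from a
# land cover map such as GlobCover) as weights. Repeated draws give a
# stochastic (ensemble) representation of the initial land use map.

def disaggregate(total_units, cell_probs, rng):
    """Allocate `total_units` area units among cells, one unit at a time,
    in proportion to `cell_probs` (an unnormalized probability surface)."""
    counts = [0] * len(cell_probs)
    cum, acc = [], 0.0
    for p in cell_probs:            # cumulative weights for sampling
        acc += p
        cum.append(acc)
    for _ in range(total_units):
        u = rng.random() * acc
        for i, c in enumerate(cum):
            if u <= c:
                counts[i] += 1
                break
    return counts

rng = random.Random(42)
probs = [0.5, 0.3, 0.15, 0.05]      # invented probability surface
ensemble = [disaggregate(1000, probs, rng) for _ in range(50)]
```

Every ensemble member conserves the census total exactly, while the cell-by-cell spread across members quantifies the locational uncertainty introduced by the disaggregation.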
Using a Bayesian data assimilation technique and census data from 2007 to 2011 as observational data, the model is identified, meaning that the final selection and optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map especially the locations of pastures are uncertain. Since the dynamics in the livestock sector play a major role in the land use development of Brazil, the effect of this uncertainty on the model output is large. Results of the data assimilation indicate that the drivers of location of the land uses vary over time (variations up to 50% in the importance of the drivers) making it difficult to find a solid stationary system representation. 
Overall, we conclude that projection up to 2030 is only of use for quantifying impacts that act on a larger aggregation level, because at the local level uncertainty is too large.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1918368P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1918368P"><span>Daily hydro- and morphodynamic simulations at Duck, NC, USA using Delft3D</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Penko, Allison; Veeramony, Jay; Palmsten, Margaret; Bak, Spicer; Brodie, Katherine; Hesser, Tyler</p> <p>2017-04-01</p> <p>Operational forecasting of the coastal nearshore has wide-ranging societal and humanitarian benefits, specifically for the prediction of natural hazards due to extreme storm events. However, understanding the model limitations and uncertainty is as important as the predictions themselves. By comparing and contrasting the predictions of multiple high-resolution models in a location with near real-time collection of observations, we are able to perform a rigorous analysis of the model results in order to achieve more robust and certain predictions. In collaboration with the U.S. Army Corps of Engineers Field Research Facility (USACE FRF) as part of the Coastal Model Test Bed (CMTB) project, we have set up Delft3D at Duck, NC, USA to run in near-real time, driven by measured wave data at the boundary. The CMTB at the USACE FRF allows for the unique integration of operational wave, circulation, and morphology models with real-time observations. The FRF has an extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data that is broadcast in near-real time onto a publicly accessible server.
Wave, current, and bed elevation instruments are permanently installed across the model domain including 2 waverider buoys in 17-m and 26-m water depths at 3.5-km and 17-km offshore, respectively, that record directional wave data every 30-min. Here, we present the workflow and output of the Delft3D hydro- and morphodynamic simulations at Duck, and show the tactical benefits and operational potential of such a system. A nested Delft3D simulation runs a parent grid that extends 12-km in the along-shore and 3.5-km in the cross-shore with 50-m resolution and a maximum depth of approximately 17-m. The bathymetry for the parent grid was obtained from a regional digital elevation model (DEM) generated by the Federal Emergency Management Agency (FEMA). The inner nested grid extends 1.8-km in the along-shore and 1-km in the cross-shore with 5-m resolution and a maximum depth of approximately 8-m. The inner nested grid initial model bathymetry is set to either the predicted bathymetry from the previous day's simulation or a survey, whichever is more recent. Delft3D-WAVE runs in the parent grid and is driven with the real-time spectral wave measurements from the waverider buoy in 17-m depth. The spectral output from Delft3D-WAVE in the parent grid is then used as the boundary condition for the inner nested high-resolution grid, in which the coupled Delft3D wave-flow-morphology model is run. 
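The daily nesting workflow described above can be sketched as a small orchestration routine. The function names and step labels are illustrative only (this is not the Delft3D API); the one piece of concrete logic from the text is that the nested grid's initial bathymetry is whichever source is more recent, a survey or the previous day's prediction:

```python
from datetime import date

# Hypothetical orchestration sketch of the daily nested Delft3D run.
# Step labels paraphrase the workflow in the text; they are not commands.

def choose_bathymetry(survey, prediction):
    """Each argument is (date, label); return the more recent source."""
    return max(survey, prediction, key=lambda src: src[0])

def daily_run(buoy_spectra, survey, prediction):
    bathy = choose_bathymetry(survey, prediction)
    return [
        f"init nested bathymetry from {bathy[1]}",
        f"run parent WAVE grid forced by buoy spectra ({buoy_spectra})",
        "nest parent spectral output into 5-m inner grid",
        "run coupled wave-flow-morphology model on inner grid",
    ]

plan = daily_run("17-m waverider",
                 (date(2017, 3, 1), "survey"),
                 (date(2017, 3, 14), "previous prediction"))
```

Here the March 14 prediction post-dates the March 1 survey, so the nested grid starts from the previous day's predicted bathymetry, exactly the fallback rule the abstract describes.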
The model results are then compared to the wave, current, and bathymetry observations collected at the FRF as well as other models that are run in the CMTB.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4250354','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4250354"><span>Circadian Activity Rhythms and Voluntary Ethanol Intake in Male and Female Ethanol-Preferring Rats: Effects of Long-Term Ethanol Access</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rosenwasser, Alan M.; McCulley, Walter D.; Fecteau, Matthew</p> <p>2014-01-01</p> <p>Chronic alcohol (ethanol) intake alters fundamental properties of the circadian clock. While previous studies have reported significant alterations in free-running circadian period during chronic ethanol access, these effects are typically subtle and appear to require high levels of intake. In the present study we examined the effects of long-term voluntary ethanol intake on ethanol consumption and free-running circadian period in male and female, selectively bred ethanol-preferring P and HAD2 rats. In light of previous reports that intermittent access can result in escalated ethanol intake, an initial 2-week water-only baseline was followed by either continuous or intermittent ethanol access (i.e., alternating 15-day epochs of ethanol access and ethanol deprivation) in separate groups of rats. Thus, animals were exposed to either 135 days of continuous ethanol access or to five 15-day access periods alternating with four 15-day periods of ethanol deprivation. 
Animals were maintained individually in running-wheel cages under continuous darkness throughout the experiment to allow monitoring of free-running activity and drinking rhythms, and 10% (v/v) ethanol and plain water were available continuously via separate drinking tubes during ethanol access. While there were no initial sex differences in ethanol drinking, ethanol preference increased progressively in male P and HAD2 rats under both continuous and intermittent-access conditions, and eventually exceeded that seen in females. Free-running period shortened during the initial ethanol-access epoch in all groups, but the persistence of this effect showed complex dependence on sex, breeding line, and ethanol-access schedule. Finally, while females of both breeding lines displayed higher levels of locomotor activity than males, there was little evidence for modulation of activity level by ethanol access. These results are consistent with previous findings that chronic ethanol intake alters free-running circadian period, and show further that the development of chronobiological tolerance to ethanol may vary by sex and genotype. PMID:25281289</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H53A1649C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H53A1649C"><span>Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. 
N.; Yu, W.; Zhang, Y.</p> <p>2015-12-01</p> <p>The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1-km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250-m grid, while the hourly analyses will feature this same 250-m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations.
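The cycle types enumerated above can be summarized as a small configuration table. The horizons and routing details are taken from the text, but the field names and structure are purely illustrative, not the operational WCOSS configuration:

```python
# Sketch of the WRF-Hydro cycle types described in the abstract.
# Horizons (days) and 250-m routing/assimilation flags follow the text;
# dictionary keys and field names are illustrative only.

configs = {
    "hourly_analysis": {"horizon_days": 0,  "kind": "analysis",
                        "routing_250m": True,  "assimilates_usgs": True},
    "short_range":     {"horizon_days": 2,  "kind": "deterministic",
                        "routing_250m": True,  "assimilates_usgs": False},
    "medium_range":    {"horizon_days": 10, "kind": "deterministic",
                        "routing_250m": True,  "assimilates_usgs": False},
    "long_range":      {"horizon_days": 30, "kind": "ensemble",
                        "routing_250m": False, "assimilates_usgs": False},
}

def horizons(cfgs):
    """Forecast horizons, in days, sorted ascending."""
    return sorted(c["horizon_days"] for c in cfgs.values())
```

All four cycle types share the 1-km NoahMP land surface execution and NHDPlusV2 channel routing; only the analysis cycle assimilates USGS streamflow.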
A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140017462','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140017462"><span>NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stueber, Thomas J.; Paxson, Daniel E.</p> <p>2014-01-01</p> <p>The work presented in this paper is intended to promote research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation composed using MATLAB software tools has been written. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modification to the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented.
Finally, the post-processing section describes the two types of files created while running the simulation and includes simulation results for a default simulation included with the source code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032187','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032187"><span>Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.</p> <p>2008-01-01</p> <p>Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations, due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as a rate of change less than the rate of sea level rise in San Francisco Bay (2.17 mm/year).
Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models, otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods. 
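The kind of zero-dimensional perturbation experiment described above can be sketched in a few lines. This is a toy box model with invented parameter values, not the chapter's calibrated model: erosion and deposition fluxes respond to a sinusoidal tidal shear stress, and raising the critical shear stress for erosion by 10% suppresses erosion and so favors net deposition.

```python
import math

# Toy zero-dimensional cohesive sediment box (invented parameters).
# Erosion E = M*(tau/tau_ce - 1) when tau > tau_ce (Partheniades-type);
# deposition D = ws*C*(1 - tau/tau_cd) when tau < tau_cd (Krone-type).

def net_deposition(tau_ce, cycles=10):
    M, ws, tau_cd = 5e-4, 5e-4, 0.3      # kg/m^2/s, m/s, Pa
    tau_max, h, rho_b = 0.5, 5.0, 400.0  # Pa, m, kg/m^3 (dry bulk)
    C, dt, T = 0.05, 60.0, 12.42 * 3600.0  # kg/m^3, s, semidiurnal period
    bed, t = 0.0, 0.0                    # bed change in metres
    while t < cycles * T:
        tau = tau_max * abs(math.sin(2 * math.pi * t / T))
        E = M * max(tau / tau_ce - 1.0, 0.0)
        D = ws * C * max(1.0 - tau / tau_cd, 0.0)
        C += dt * (E - D) / h            # well-mixed concentration
        bed += dt * (D - E) / rho_b      # bed gains what the water loses
        t += dt
    return bed

base = net_deposition(0.4)
favoring = net_deposition(0.44)  # critical shear stress raised 10%
```

Because erosion is strictly smaller at every instant with the higher critical shear stress, the perturbed run deposits more, qualitatively mirroring the 10% perturbations-to-favor-deposition experiment in the chapter.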
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUSM.A51C..01G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUSM.A51C..01G"><span>The Good, the Bad, and the Ugly: Numerical Prediction for Hurricane Juan (2003)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gyakum, J.; McTaggart-Cowan, R.</p> <p>2004-05-01</p> <p>The range of accuracy of the numerical weather prediction (NWP) guidance for the landfall of Hurricane Juan (2003), from nearly perfect to nearly useless, motivates a study of the NWP forecast errors on 28-29 September 2003 in the eastern North Atlantic. Although the forecasts issued over the period were of very high quality, this is primarily because of the diligence of the forecasters, and not related to the reliability of the numerical predictions provided to them by the North American operational centers and the research community. A bifurcation in the forecast fields from various centers and institutes occurred beginning with the 0000 UTC run of 28 September, and continuing until landfall just after 0000 UTC on 29 September. The GFS (NCEP), Eta (NCEP), GEM (Canadian Meteorological Centre; CMC), and MC2 (McGill) forecast models all showed an extremely weak (minimum SLP above 1000 hPa) remnant vortex moving north-northwestward into the Gulf of Maine and merging with a diabatically-developed surface low offshore. The GFS uses a vortex-relocation scheme, the Eta a vortex bogus, and the GEM and MC2 are run on CMC analyses that contain no enhanced vortex. The UK Met Office operational, the GFDL, and the NOGAPS (US Navy) forecast models all ran a small-scale hurricane-like vortex directly into Nova Scotia and verified very well for this case. 
The UKMO model uses synthetic observations to enhance structures in poorly-forecasted areas during the analysis cycle, and both the GFDL and NOGAPS models use advanced idealized vortex bogusing in their initial conditions. The quality of the McGill MC2 forecast is found to be significantly enhanced using a bogusing technique similar to that used in the initialization of the successful forecast models. A verification of the improved forecast is presented along with a discussion of the need for operational quality control of the background fields in the analysis cycle and for proper representation of strong, small-scale tropical vortices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26900500','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26900500"><span>THE EFFECT OF STEP RATE MANIPULATION ON FOOT STRIKE PATTERN OF LONG DISTANCE RUNNERS.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Allen, Darrell J; Heisler, Hollie; Mooney, Jennifer; Kring, Richard</p> <p>2016-02-01</p> <p>Running gait retraining to change foot strike pattern in runners from a heel-strike pattern to a non-heel-strike pattern has been shown to reduce impact forces and may help to reduce running-related injuries. Step rate manipulation above preferred is known to help decrease step length, foot inclination angle, and vertical mass excursion, but has not yet been evaluated as a method to change foot strike pattern. The purpose of this study was to investigate the effect of step rate manipulation on foot strike pattern in shod recreational runners who run with a heel-strike pattern. A secondary purpose was to describe the effect of step rate manipulation at specific percentages above preferred on foot inclination angle at initial contact. 
Forty volunteer runners, who were self-reported heel strikers and had a weekly running mileage of at least 10 miles, were recruited. Runners were confirmed to be heel strikers during the warm-up period on the treadmill. The subject's step rate was determined at their preferred running pace. A metronome was used to increase step rate above the preferred step rate by 5%, 10% and 15%. 2D video motion analysis was utilized to determine foot strike pattern and to measure foot inclination angle at initial contact for each step rate condition. There was a statistically significant change in foot strike pattern from a heel-strike pattern to a mid-foot or forefoot strike pattern at both 10% and 15% step rates above preferred. Seven of the 40 subjects (17.5%) changed from a heel-strike pattern to a non-heel-strike pattern at +10% and 12 of the 40 subjects (30%) changed to a non-heel-strike pattern at +15%. Mean foot inclination angle at initial contact showed a statistically significant reduction as step rate increased. Step rate manipulation of 10% or greater may be enough to change foot strike pattern from a heel strike to a mid-foot or forefoot strike pattern in a small percentage of recreational runners who run in traditional running shoes. If changing the foot strike pattern is the main goal, other gait retraining methods may be needed to make a change from a heel strike to a non-heel-strike pattern. Step rate manipulation shows a progressive reduction of foot inclination angle at 5%, 10%, and 15% above preferred step rate, which reduces the severity of the heel strike at initial contact. Step rate manipulation of at least +10% above preferred may be an effective running gait retraining method for clinicians to decrease the severity of heel strike and possibly assist a runner to change to a non-heel-strike pattern. 
3.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.212.1450P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.212.1450P"><span>Profiling the robustness, efficiency and limits of the forward-adjoint method for 3-D mantle convection modelling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Price, M. G.; Davies, J. H.</p> <p>2018-02-01</p> <p>Earth's past mantle structure is inherently unknown. This lack of knowledge presents problems in many areas of Earth science, including in mantle circulation modelling (MCM). 
As a mathematical model of mantle convection, MCMs require boundary and initial conditions. While boundary conditions are readily available from sources such as plate reconstructions for the upper surface, and as free slip at the core-mantle boundary, the initial condition is not known. MCMs have historically `created' an initial condition through long `spin-up' processes starting from the oldest available plate reconstruction period. While these do yield good results when models are run to present day, it is difficult to infer with confidence results from early in a model's history. Techniques to overcome this problem are now being studied in geodynamics, such as by assimilating the known internal structure (e.g. from seismic tomography) of Earth at present day backwards in time. One such method is to use an iterative process known as the forward-adjoint method. While this is an efficient means of solving this inverse problem, it still strains all but the most cutting-edge computational systems. In this study we endeavour to profile the effectiveness of this method using synthetic test cases as our known data source. We conclude that savings in terms of computational expense for forward-adjoint models can be achieved by streamlining the time-stepping of the calculation, as well as determining the most efficient method of updating initial conditions in the iterative scheme. 
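The shape of such a forward-adjoint iteration can be illustrated with a toy problem. The sketch below uses 1-D diffusion as a stand-in for the far more complex mantle convection forward model; all names and parameter values are hypothetical. Because pure diffusion is (essentially) self-adjoint, the adjoint propagator is another forward run applied to the misfit, which makes each iteration a gradient-descent correction of the guessed initial condition.

```python
import numpy as np

def forward(t0, kappa=1e-2, steps=200, dt=1e-2, dx=0.1):
    """Explicit 1-D diffusion: a toy stand-in for the forward model.
    Stability requires kappa*dt/dx**2 <= 0.5 (here 0.01)."""
    t = t0.copy()
    r = kappa * dt / dx**2
    for _ in range(steps):
        t[1:-1] += r * (t[2:] - 2.0 * t[1:-1] + t[:-2])
    return t

def recover_initial_state(t_obs, iters=200, lr=1.0, **kw):
    """Iteratively update a guessed initial condition so that the forward
    run matches the 'present-day' state t_obs (synthetic, as in the paper)."""
    t0 = np.zeros_like(t_obs)
    for _ in range(iters):
        misfit = forward(t0, **kw) - t_obs        # forward run, compare to data
        # Adjoint step: for self-adjoint diffusion, propagating the misfit
        # with the forward operator gives the gradient of the data misfit.
        t0 -= lr * forward(misfit, **kw)
    return t0
```

As in the study, only the components of the initial condition that survive the forward operator can be recovered; diffusion (like mantle mixing) damps fine structure, which is why convergence over long time intervals is limited.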
Furthermore, we observe that in the models presented, there exists an upper limit on the time interval over which solutions will practically converge, although this limit is likely to be linked to the Rayleigh number.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GMD.....7.1641P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GMD.....7.1641P"><span>Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.</p> <p>2014-08-01</p> <p>Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. 
Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases significantly improves our ability to predict the local atmospheric circulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1243063-intercomparison-methods-coupling-between-convection-large-scale-circulation-comparison-over-uniform-surface-conditions','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1243063-intercomparison-methods-coupling-between-convection-large-scale-circulation-comparison-over-uniform-surface-conditions"><span>Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...</p> <p>2015-10-24</p> <p>Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. 
The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method, and a systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of different sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. 
In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/922996','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/922996"><span>Dose dependent effects of exercise training and detraining on total and regional adiposity in 4,663 men and 1,743</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Williams, Paul T.; Thompson, Paul D.</p> <p>2006-01-06</p> <p>Objective: To determine if exercise reduces body weight and to examine the dose-response relationships between changes in exercise and changes in total and regional adiposity. Methods and Results: Questionnaires on weekly running distance and adiposity from a large prospective study of 3,973 men and 1,444 women who quit running (detraining), 270 men and 146 women who started running (training) and 420 men and 153 women who remained sedentary during 7.4 years of follow-up. There were significant inverse relationships between change in the amount of vigorous exercise (km/wk run) and changes in weight and BMI in men (slope±SE: -0.039±0.005 kg and -0.012±0.002 kg/m2 per km/wk, respectively) and older women (-0.060±0.018 kg and -0.022±0.007 kg/m2 per km/wk) who quit running, and in initially sedentary men (-0.098±0.017 kg and -0.032±0.005 kg/m2 per km/wk) and women (-0.062±0.023 kg and -0.021±0.008 kg/m2 per km/wk) who started running. Changes in waist circumference were also inversely related to changes in running distance in men who quit (-0.026±0.005 cm per km/wk) or started running (-0.078±0.017 cm per km/wk). Conclusions: 
The initiation and cessation of vigorous exercise decrease and increase body weight and intra-abdominal fat, respectively, and these changes are proportional to the change in exercise dose.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.8927S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.8927S"><span>The 2014 Lake Askja rockslide tsunami - optimization of landslide parameters comparing numerical simulations with observed run-up</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn</p> <p>2016-04-01</p> <p>The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera an inner caldera was formed in an eruption in 1875 and over the next 40 years it gradually subsided and filled up with water, forming Lake Askja. A large rockslide was released from the Southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m3, of which 10.5 million m3 was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake, and inundated the shores around the entire lake after 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year but as luck would have it, the event occurred near midnight when no one was in the area. 
Field surveys conducted in the months following the event resulted in an extensive dataset. The dataset contains, e.g., maximum inundation, a high-resolution digital elevation model of the entire inner caldera, and a high-resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow water version and an extension of GeoClaw that includes dispersion were employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run-up. By minimizing the mean squared error between simulations and observations, a set of best-fit landslide parameters (friction parameters, initial speed and block size) was determined. While we were able to obtain a close fit with observations using the dispersive model, it proved impossible to constrain the landslide parameters to fit the data using a shallow water model. 
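The inverse-modeling step just described can be sketched as a brute-force parameter search. In this minimal illustration, `simulate` stands in for a full GeoClaw landslide-tsunami run, and the parameter names (`friction_grid`, `speed_grid`) are hypothetical; the study's actual optimization over friction parameters, initial speed and block size would wrap a far more expensive forward model.

```python
import itertools

def best_fit_parameters(observed, simulate, friction_grid, speed_grid):
    """Grid-search sketch: pick the (friction, initial speed) pair whose
    simulated run-up minimizes the mean squared error against observed
    run-up at the survey stations."""
    best, best_mse = None, float("inf")
    for mu, v0 in itertools.product(friction_grid, speed_grid):
        predicted = simulate(mu, v0)       # one forward model run per candidate
        mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
        if mse < best_mse:
            best, best_mse = (mu, v0), mse
    return best, best_mse
```

In practice a gradient-free optimizer would replace the exhaustive grid when each forward run takes hours, but the objective (minimum MSE between simulated and observed run-up) is the same.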
As a consequence, we conclude that in the present case, dispersive effects were crucial in obtaining the correct inundation pattern, and that a shallow water model produced large artificial offsets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1815811B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1815811B"><span>A 3D Global Climate Model of the Pluto atmosphere coupled to a volatile transport model to interpret New Horizons observations, including the N2, CH4 and CO cycles and the formation of organic hazes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bertrand, Tanguy; Forget, Francois</p> <p>2016-04-01</p> <p>To interpret New Horizons observations and simulate the Pluto climate system, we have developed a Global Climate Model (GCM) of Pluto's atmosphere. In addition to a 3D "dynamical core" which solves the equations of meteorology, the model takes into account the N2 condensation and sublimation and its thermal and dynamical effects, the vertical turbulent mixing, the radiative transfer through methane and carbon monoxide, molecular thermal conduction, and a detailed surface thermal model with different thermal inertia for various timescales (diurnal, seasonal). The GCM also includes a detailed model of the CH4 and CO cycles, taking into account their transport by the atmospheric circulation and turbulence, as well as their condensation and sublimation on the surface and in the atmosphere, possibly forming methane ice clouds. The GCM consistently predicts the 3D methane abundance in the atmosphere, which is used as an input for our radiative transfer calculation. 
In a second phase, we also developed a volatile transport model, derived from the GCM, which can be run over thousands of years in order to reach consistent initial states for the GCM runs and better explore the seasonal processes on Pluto. Results obtained with the volatile transport model show that the distribution of N2, CH4 and CO ices primarily depends on the seasonal thermal inertia used for the different ices, and is affected by the assumed topography as well. As observed, it is possible to form a large and permanent nitrogen glacier with CO and CH4 ice deposits in an equatorial basin corresponding to Sputnik Planum, while having a surface pressure evolution consistent with stellar occultations and New Horizons data. In addition, most of the methane ice is sequestered with N2 ice in the basin, but seasonal polar caps of CH4 frosts also form, explaining the bright polar caps observed with Hubble in the 1980s, in line with New Horizons observations. Using such a balanced combination of surface and subsurface conditions as initial conditions, we run the GCM from 1975 to 2015, so that the model becomes insensitive to the assumed atmospheric initial states (which are not constrained by the volatile transport model). The simulated thermal structure and waves can be compared to the New Horizons occultation measurements. As observed, the horizontal variability is very limited, for fundamental reasons. In addition, we have developed a 3D model of the formation of organic hazes within the GCM. It includes the different steps of aerosol formation as understood on Titan: photolysis of CH4 in the upper atmosphere by the Lyman-alpha radiation, production of various gaseous precursor species, conversion into solid particles through chemistry and aggregation processes, and gravitational sedimentation. Significant amounts of haze particles are found to be present at all latitudes up to 100 km. 
However, when N2 ice is condensing in the polar night, the majority of the haze particles tend to accumulate there because of the transport of the haze precursors and aerosols by the condensation flow.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT.......141C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT.......141C"><span>Friction and Environmental Sensitivity of Molybdenum Disulfide: Effects of Microstructure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Curry, John F.</p> <p></p> <p>For nearly a century, molybdenum disulfide has been employed as a solid lubricant to reduce the friction and wear between surfaces. MoS2 is in a class of unique materials, transition metal dichalcogenides (TMDCs), that have a single crystal structure forming lamellae that interact via weak van der Waals forces. This dissertation focuses on the link between the microstructure of MoS2 and the energetics of running film formation to reduce friction, and the effects of environmental sensitivity on performance. Nitrogen-impinged MoS2 films are utilized as a comparator to amorphous PVD-deposited MoS2 in many of the studies due to the highly ordered surface-parallel basal texture of sprayed films. Comparisons showed that films with a highly ordered structure can reduce high friction behavior during run-in. It is thought that shear-induced reorientation of amorphous films contributes to typically high initial friction during run-in. In addition to a reduction in initial friction, highly ordered MoS2 films are shown to be more resistant to penetration from oxidative aging processes. 
High sensitivity, low-energy ion scattering (HS-LEIS) enabled depth profiles that showed oxidation limited to the first monolayer for ordered films and throughout the depth (4-5 nm) for amorphous films. X-ray photoelectron spectroscopy supported these findings, showing far more oxidation in amorphous films than ordered films. Many of these results show the benefits of a well-run-in coating, yet transient increases in initial friction can still be noticed after only 5-10 minutes. It was found that the transient return to high initial friction after dwell times past 5-10 minutes was not due to adsorbed species such as water, but possibly an effect of basal plane relaxation to a commensurate state. Additional techniques and methods were developed to study the effect of adsorbed water and load on running film formation via spiral orbit XRD studies. Spiral orbit experiments enabled large enough worn areas for study in the XRD. Diffraction patterns for sputtered coatings at high loads (1 N) showed more intense signals for surface-parallel basal plane representation than at lower loads (100 mN). Tests run in dry and humid nitrogen (20% RH), however, showed no differences in reorientation of basal planes. Microstructure was found to be an important factor in determining the tribological performance of MoS2 films in a variety of testing conditions and environments. 
These findings will be useful in developing a mechanistic framework for better understanding the energetics of running film formation and how different environments play a role.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=336363&keyword=health&subject=health%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=07/14/2012&dateendpublishedpresented=07/14/2017&sortby=pubdateyear','PESTICIDES'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=336363&keyword=health&subject=health%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=07/14/2012&dateendpublishedpresented=07/14/2017&sortby=pubdateyear"><span>A Nested Nearshore Nutrient Model (N³M) for ...</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Nearshore conditions drive phenomena like harmful algal blooms (HABs), and the nearshore and coastal margin are the parts of the Great Lakes most used by humans. To assess conditions, optimize monitoring, and evaluate management options, a model of nearshore nutrient transport and algal dynamics is being developed. The model targets a "regional" spatial scale, similar to the Great Lakes Aquatic Habitat Framework's sub-basins, which divide the nearshore into 30 regions. Model runs are 365 days (a whole-season temporal scale), reporting at 3-hour intervals. N³M uses output from existing hydrodynamic models and simple transport kinetics. The nutrient transport component of this model is largely complete, and is being tested with various hydrodynamic data sets. The first test case covers a 200 km² area between two major tributaries to Lake Michigan, the Grand and Muskegon. 
N³M currently simulates phosphorus and chloride, selected for their distinct in-lake transport dynamics; nitrogen will be added. Initial results for 2003, 2010, and 2015 show encouraging correlations with field measurements. Initially implemented in MATLAB, the model is currently implemented in Python and leverages multi-processor computation. The 4D in-browser visualizer Cesium is used to view model output, time-varying satellite imagery, and field observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21569779','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21569779"><span>Does a crouched leg posture enhance running stability and robustness?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre</p> <p>2011-07-21</p> <p>Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. 
The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height. Copyright © 2011 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100026666','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100026666"><span>P.88 Regional Precipitation Forecast with Atmospheric Infrared Sounder (AIRS) Profiles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chou, Shih-Hung; Zavodsky, Bradley; Jedlovec, Gary</p> <p>2010-01-01</p> <p>Prudent assimilation of AIRS thermodynamic profiles and quality indicators can improve initial conditions for regional weather models. In general, AIRS-enhanced analysis more closely resembles radiosondes than the CNTL; forecasts with AIRS profiles are generally closer to NAM analyses than CNTL for sensible weather parameters (not shown here). 
Assimilation of AIRS leads to an overall QPF improvement in 6-h accumulated precipitation forecasts. Including AIRS profiles in the assimilation process enhances the low-level instability and produces stronger updrafts and a better precipitation forecast than the CNTL run.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810016146','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810016146"><span>The thermal influence of continents on a model-generated January climate</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Spar, J.; Cohen, C.; Wu, P.</p> <p>1981-01-01</p> <p>Two climate simulations were compared. Both climate computations were initialized with the same horizontally uniform state of rest. However, one was carried out on a water planet (without continents), while the second was repeated on a planet with geographically realistic but flat (sea level) continents. The continents in this experiment have a uniform albedo of 0.14, except where snow accumulates, a uniform roughness height of 0.3 m, and zero water storage capacity. Both runs were carried out for a 'perpetual January' with solar declination fixed at January 15.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830008909','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830008909"><span>Solution of nonlinear multivariable constrained systems using a gradient projection digital algorithm that is insensitive to the initial state</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hargrove, A.</p> <p>1982-01-01</p> <p>Optimal digital control of nonlinear multivariable constrained systems was studied. 
The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1035324','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1035324"><span>Volume 2: Compendium of Abstracts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2017-06-01</p> <p>simulation work using a standard running model for legged systems, the Spring Loaded Inverted Pendulum (SLIP) Model. In this model, the dynamics of a single...bar SLIP model is analyzed using basin-of-attraction analyses to determine the optimal configuration for running at different velocities and...acquisition, and the automatic target acquisition were then compared to each other. 
After running trials with the current system, it will be</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=drosophila&pg=3&id=EJ1059496','ERIC'); return false;" href="https://eric.ed.gov/?q=drosophila&pg=3&id=EJ1059496"><span>The Impact of Odor--Reward Memory on Chemotaxis in Larval "Drosophila"</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Schleyer, Michael; Reid, Samuel F.; Pamir, Evren; Saumweber, Timo; Paisios, Emmanouil; Davies, Alexander; Gerber, Bertram; Louis, Matthieu</p> <p>2015-01-01</p> <p>How do animals adaptively integrate innate with learned behavioral tendencies? We tackle this question using chemotaxis as a paradigm. Chemotaxis in the "Drosophila" larva largely results from a sequence of runs and oriented turns. Thus, the larvae minimally need to determine (i) how fast to run, (ii) when to initiate a turn, and (iii)…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=dentistry&pg=4&id=EJ991032','ERIC'); return false;" href="https://eric.ed.gov/?q=dentistry&pg=4&id=EJ991032"><span>Learning Surgically Oriented Anatomy in a Student-Run Extracurricular Club: An Education through Recreation Initiative</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ullah, Shahnoor M.; Bodrogi, Andrew; Cristea, Octav; Johnson, Marjorie; McAlister, Vivian C.</p> <p>2012-01-01</p> <p>Didactic and laboratory anatomical education have seen significant reductions in the medical school curriculum due, in part, to the current shift from basic science to more clinically based teaching in North American medical schools. 
In order to increase medical student exposure to anatomy, with clinical applicability, a student-run initiative…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Physical+AND+activity+AND+level%2c+AND+genders&pg=7&id=EJ1059766','ERIC'); return false;" href="https://eric.ed.gov/?q=Physical+AND+activity+AND+level%2c+AND+genders&pg=7&id=EJ1059766"><span>Healthy Living Initiative: Running/Walking Club</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Stylianou, Michalis; Kulinna, Pamela Hodges; Kloeppel, Tiffany</p> <p>2014-01-01</p> <p>This study was grounded in the public health literature and the call for schools to serve as physical activity intervention sites. Its purpose was twofold: (a) to examine the daily distance covered by students in a before-school running/walking club throughout 1 school year and (b) to gain insights on the teachers' perspectives of the club.…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2013-06-19/pdf/2013-14569.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2013-06-19/pdf/2013-14569.pdf"><span>78 FR 36691 - Airworthiness Directives; Piaggio Aero Industries S.p.A Airplanes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2013-06-19</p> <p>... that the cracks were initiated by an unforeseen friction in the MLG wheel lever sub-assembly. This... in loss of control of the aeroplane during take-off or landing runs. To address this potential unsafe... loss of control during take-off or landing runs. 
(f) Actions and Compliance Unless already done, do the...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016CQGra..33k5008Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016CQGra..33k5008Z"><span>On the running of the spectral index to all orders: a new model-dependent approach to constrain inflationary models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zarei, Moslem</p> <p>2016-06-01</p> <p>In conventional model-independent approaches, the power spectrum of primordial perturbations is characterized by such free parameters as the spectral index, its running, the running of running, and the tensor-to-scalar ratio. In this work we show that, at least for simple inflationary potentials, one can find the primordial scalar and tensor power spectra exactly by resumming over all the running terms. In this model-dependent method, we expand the power spectra about the pivot scale to find the series terms as functions of the e-folding number for some single field models of inflation. Interestingly, for the viable models studied here, one can sum over all the terms and evaluate the exact form of the power spectra. This in turn gives more accurate parametrization of the specific models studied in this work. 
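The conventional expansion that this model-dependent approach generalizes can be sketched in standard slow-roll notation (this parametrization is textbook convention, not taken from the paper itself): the scalar power spectrum is expanded about the pivot scale \(k_*\),

```latex
\ln \mathcal{P}_s(k) = \ln A_s + (n_s - 1)\ln\frac{k}{k_*}
  + \frac{\alpha_s}{2}\ln^2\frac{k}{k_*}
  + \frac{\beta_s}{6}\ln^3\frac{k}{k_*} + \cdots
```

where \(\alpha_s\) is the running and \(\beta_s\) the running of the running; the approach described above corresponds to summing this series to all orders for a specific inflationary potential.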
We finally compare our results with recent cosmic microwave background data to find that our new power spectra are in good agreement with the data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997JGR...10215967E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997JGR...10215967E"><span>Variational data assimilation for tropospheric chemistry modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Elbern, Hendrik; Schmidt, Hauke; Ebel, Adolf</p> <p>1997-07-01</p> <p>The method of variational adjoint data assimilation has been applied to assimilate chemistry observations into a comprehensive tropospheric gas phase model. The rationale of this method is to find the correct initial values for a subsequent atmospheric chemistry model run when observations scattered in time are available. The variational adjoint technique is considered a promising tool for future advanced meteorological forecasting. The stimulating experience gained with the application of four-dimensional variational data assimilation in this research area has motivated the attempt to apply the technique to air quality modeling and analysis of the chemical state of the atmosphere. The present study describes the development and application of the adjoint of the second-generation regional acid deposition model gas phase mechanism, which is used in the European air pollution dispersion model system. Performance results of the assimilation scheme using both model-generated data and real observations are presented for tropospheric conditions. In the former case it is demonstrated that time series of only a few or even a single measured key species convey sufficient information to considerably improve the analysis of unobserved species that are directly coupled with the observed species. 
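The variational adjoint idea summarized in this abstract can be illustrated with a toy scalar model: the cost function measures the misfit between a model trajectory and sparse observations, and its gradient with respect to the initial value is computed by a backward (adjoint) recursion. Everything below (the decay factor, observation times, learning rate) is invented for illustration and has nothing to do with the RADM2 mechanism used in the actual study.

```python
# Toy 4D-Var-style adjoint assimilation: recover the initial
# concentration c0 of a scalar linear "chemistry" model
# c[t+1] = a * c[t] from sparse observations.

def forward(c0, a, n):
    """Integrate the model forward, returning the trajectory c[0..n]."""
    traj = [c0]
    for _ in range(n):
        traj.append(a * traj[-1])
    return traj

def cost_and_gradient(c0, a, obs):
    """Cost J = 0.5 * sum_t (c[t] - y[t])^2 over observed times,
    with dJ/dc0 obtained from the backward (adjoint) recursion."""
    n = max(obs)
    traj = forward(c0, a, n)
    J = 0.5 * sum((traj[t] - y) ** 2 for t, y in obs.items())
    lam = 0.0
    for t in range(n, -1, -1):  # adjoint integrates backward in time
        misfit = traj[t] - obs[t] if t in obs else 0.0
        lam = misfit + a * lam
    return J, lam  # lam is now dJ/dc0

def assimilate(c0_guess, a, obs, lr=0.05, steps=200):
    """Steepest-descent minimization of J over the initial value."""
    c0 = c0_guess
    for _ in range(steps):
        _, grad = cost_and_gradient(c0, a, obs)
        c0 -= lr * grad
    return c0
```

In this linear toy, observations at only a few times suffice to recover the initial value exactly, mirroring the abstract's point that a few observed key species can constrain the analysis of coupled unobserved quantities.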
In the latter case, a Lagrangian approach is adopted in which trajectory calculations between two comprehensively furnished measurement sites are carried out. The method allows us to analyze initial data for air pollution modeling even when only sparse observations are available. Besides remarkable improvements in model performance from properly analyzed initial concentrations, it is shown that the adjoint algorithm also makes it feasible to estimate the sensitivity of ozone concentrations to their precursors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017A%26A...598A.116V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017A%26A...598A.116V"><span>A grid of one-dimensional low-mass star formation collapse models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vaytet, N.; Haugbølle, T.</p> <p>2017-02-01</p> <p>Context. Numerical simulations of star formation are becoming ever more sophisticated, incorporating new physical processes in increasingly realistic set-ups. These models are being compared to the latest observations through state-of-the-art synthetic renderings that trace the different chemical species present in the protostellar systems. The chemical evolution of the interstellar and protostellar matter is very topical, with more and more chemical databases and reaction solvers available online to the community. Aims: The current study was developed to provide a database of relatively simple numerical simulations of protostellar collapse as a template library for observations of cores and very young protostars, and for researchers who wish to test their chemical modelling under dynamic astrophysical conditions. 
It was also designed to identify statistical trends that may appear when running many models of the formation of low-mass stars by varying the initial conditions. Methods: A large set of 143 calculations of the gravitational collapse of an isolated sphere of gas with uniform temperature and a Bonnor-Ebert-like density profile was undertaken using a 1D fully implicit Lagrangian radiation hydrodynamics code. The parameter space covered initial masses from 0.2 to 8 M⊙, temperatures of 5-30 K, and radii 3000 ≤ R0 ≤ 30 000 AU. Results: A spread due to differing initial conditions and optical depths was found in the thermal evolutionary tracks of the runs. Within less than an order of magnitude, all first and second Larson cores had masses and radii essentially independent of the initial conditions. Radial profiles of the gas density, velocity, and temperature were found to vary much more outside of the first core than inside. The time elapsed between the formation of the first and second cores was found to strongly depend on the first core mass accretion rate, and no first core in our grid of models lived for longer than 2000 years before the onset of the second collapse. Conclusions: The end product of a protostellar cloud collapse, the second Larson core, is at birth a canonical object with a mass and radius of about 3 MJ and 8 RJ, independent of its initial conditions. The evolution sequence which brings the gas to stellar densities can, however, proceed in a variety of scenarios, on different timescales or along different isentropes, but each story line can largely be predicted by the initial conditions. All the data from the simulations are publicly available. The figures and raw data for every simulation output can be found at this address: http://starformation.hpc.ku.dk/grid-of-protostars. 
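As a rough point of reference for the timescales involved in such collapse calculations, the free-fall time of a uniform sphere can be estimated from the standard textbook formula t_ff = sqrt(3π / (32·G·ρ)). The mass and radius below are illustrative values from within the grid's parameter range; this back-of-the-envelope estimate is no substitute for the radiation-hydrodynamics calculations themselves.

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]
YEAR = 3.156e7     # year [s]

def free_fall_time_years(mass_msun, radius_au):
    """Free-fall time of a uniform-density sphere:
    t_ff = sqrt(3*pi / (32*G*rho)), returned in years."""
    radius = radius_au * AU
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    rho = (mass_msun * M_SUN) / volume  # mean density [kg/m^3]
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YEAR

# e.g. a 1 M_sun sphere of radius 10,000 AU collapses on the order of 10^5 yr
t_ff = free_fall_time_years(1.0, 10000.0)
```

A more compact initial sphere of the same mass is denser and therefore collapses faster, consistent with the spread of evolutionary timescales reported across the grid.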
Copies of the outputs, as well as Table C.1, are also available in the form of static electronic tables at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A116</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140013334','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140013334"><span>Simulations and Visualizations of Hurricane Sandy (2012) as Revealed by the NASA CAMVis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shen, Bo-Wen</p> <p>2013-01-01</p> <p>Storm Sandy first appeared as a tropical 
storm in the southern Caribbean Sea on Oct. 22, 2012, moved northeastward, turned northwestward, and made landfall near Brigantine, New Jersey in late October. Sandy devastated surrounding areas, caused an estimated damage of $50 billion, and became the second costliest tropical cyclone (TC) in U.S. history, surpassed only by Hurricane Katrina (2005). To save lives and mitigate economic damage, a central question to be addressed is to what extent the lead time of severe storm prediction such as Sandy can be extended (e.g., Emanuel 2012; Kerr 2012). In this study, we present 10 numerical experiments initialized at 0000 and 1200 UTC Oct. 22-26, 2012, with the NASA coupled advanced global modeling and visualization systems (CAMVis). All of the predictions realistically capture Sandy's movement with the northwestward turn prior to its landfall. However, three experiments (initialized at 0000 UTC Oct. 22 and 24 and 1200 UTC Oct. 22) produce larger errors. Among the 10 experiments, the control run initialized at 0000 UTC Oct. 23 produces a remarkable 7-day forecast. To illustrate the impact of environmental flows on the predictability of Sandy, we produce and discuss four-dimensional (4-D) visualizations with the control run. 4-D visualizations clearly demonstrate the following multiscale processes that led to the sinuous track of Sandy: the initial steering impact of an upper-level trough (appearing over the northwestern Caribbean Sea and Gulf of Mexico), the blocking impact of systems to the northeast of Sandy, and the binary interaction with a mid-latitude, upper-level trough that appeared at 130 degrees west longitude on Oct. 23, moved to the East Coast and intensified during the period of Oct. 
29-30 prior to Sandy's landfall.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28040586','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28040586"><span>Simultaneous arsenic and fluoride removal from synthetic and real groundwater by electrocoagulation process: Parametric and cost evaluation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Thakur, Lokendra Singh; Mondal, Prasenjit</p> <p>2017-04-01</p> <p>Co-existence of arsenic and fluoride in groundwater has raised severe health issues for living beings. Thus, the present research has been conducted for simultaneous removal of arsenic and fluoride from synthetic groundwater by using an electrocoagulation process with aluminum electrodes. Effects of initial pH, current density, run time, inter-electrode distance and NaCl concentration on the percentage removal of arsenic and fluoride, as well as on operating cost, have been studied. The optimum experimental conditions are found to be initial pH: 7, current density: 10 A/m², run time: 95 min, inter-electrode distance: 1 cm, NaCl concentration: 0.71 g/l for removal of 98.51% arsenic (initial concentration: 550 μg/l) and 88.33% fluoride (initial concentration: 12 mg/l). The concentrations of arsenic and fluoride in treated water are found to be 8.19 μg/l and 1.4 mg/l, respectively, with an operating cost of 0.357 USD/m³ treated water. Pseudo-first- and pseudo-second-order kinetic models of individual and simultaneous arsenic and fluoride removal in electrocoagulation have also been studied. Characterization of the produced sludge also confirms the presence of arsenic (as As(III)) and fluoride in the sludge. 
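The pseudo-first- and pseudo-second-order kinetics mentioned above can be written as simple concentration-decay laws. The sketch below uses an invented rate constant purely for illustration; the study's fitted rate constants are not reproduced here.

```python
import math

def pseudo_first_order(c0, k1, t):
    """Pseudo-first-order decay: C(t) = C0 * exp(-k1 * t)."""
    return c0 * math.exp(-k1 * t)

def pseudo_second_order(c0, k2, t):
    """Pseudo-second-order decay: C(t) = C0 / (1 + k2 * C0 * t)."""
    return c0 / (1.0 + k2 * c0 * t)

def percent_removal(c0, c):
    """Percentage of the initial concentration removed."""
    return 100.0 * (1.0 - c / c0)

# Illustrative only: 550 ug/l arsenic with a hypothetical k1 = 0.04/min
# over a 95-min run gives roughly 98% removal, the same order of
# magnitude as the removal reported in the study.
c_final = pseudo_first_order(550.0, 0.04, 95.0)
```

Fitting both laws to measured concentration-time data and comparing residuals is the usual way to decide which kinetic order better describes the process.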
The present electrocoagulation process is able to reduce the arsenic and fluoride concentration of synthetic as well as real groundwater to below 10 μg/l and 1.5 mg/l, respectively, which are the maximum contaminant levels of these elements in drinking water according to WHO guidelines. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020047051','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020047051"><span>Chemistry-Climate Interactions in the Goddard Institute for Space Studies General Circulation Model. 2; New Insights into Modeling the Pre-Industrial Atmosphere</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Grenfell, J. Lee; Shindell, D. T.; Koch, D.; Rind, D.; Hansen, James E. (Technical Monitor)</p> <p>2002-01-01</p> <p>We investigate the chemical (hydroxyl and ozone) and dynamical response to changing from present day to pre-industrial conditions in the Goddard Institute for Space Studies General Circulation Model (GISS GCM). We identify three main improvements not included by many other works. Firstly, our model includes interactive cloud calculations. Secondly, we reduce sulfate aerosol, which impacts NOx partitioning and hence Ox distributions. Thirdly, we reduce sea surface temperatures and increase ocean ice coverage, which impact water vapor and ground albedo, respectively. Changing the ocean data (hence water vapor and ozone) produces a potentially important feedback between the Hadley circulation and convective cloud cover. Our present day run (run 1, control run) global mean OH value was 9.8 × 10^5 molecules/cc. 
For our best estimate of pre-industrial conditions run (run 2), which featured modified chemical emissions, sulfate aerosol and sea surface temperatures/ocean ice, this value changed to 10.2 × 10^5 molecules/cc. Reducing only the chemical emissions to pre-industrial levels in run 1 (run 3) resulted in this value increasing to 10.6 × 10^5 molecules/cc. Reducing the sulfate in run 3 to pre-industrial levels (run 4) resulted in a small increase in global mean OH (10.7 × 10^5 molecules/cc). Changing the ocean data in run 4 to pre-industrial levels (run 5) led to a reduction in this value to 10.3 × 10^5 molecules/cc. Mean tropospheric ozone burdens were 262, 181, 180, 180, and 182 Tg for runs 1-5 respectively.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017GeCoA.217..334W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017GeCoA.217..334W"><span>Fe(III):S(-II) concentration ratio controls the pathway and the kinetics of pyrite formation during sulfidation of ferric hydroxides</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wan, Moli; Schröder, Christian; Peiffer, Stefan</p> <p>2017-11-01</p> <p>The formation of pyrite has been extensively studied because of its abundance in many anoxic environments. Yet, there is no consensus on the underlying pathways and kinetics of its formation. We studied the formation of pyrite during the reaction between reactive ferric hydroxides (goethite and lepidocrocite) and aqueous sulfide in an anoxic glove box at neutral pH. The formation of pyrite was monitored with Mössbauer spectroscopy using 57Fe isotope-enriched ferric (hydr)oxides. The initial molar ratios of Fe(III):S(-II) were adjusted to be 'high' with Fe(III) concentrations in excess of sulfide (HR) and 'low' (LR) with excess of sulfide. 
Approximately the same surface area was applied in all HR runs in order to compare the mineral reactivity of ferric hydroxides. Electron transfer between aqueous sulfide and ferric hydroxides in the first 2 h led to the formation of ferrous iron and methanol-extractable oxidized sulfur (MES). Metastable FeSx formed in all of the experiments. Pyrite formed at different rates in HR and LR runs, although the MES and ferrous iron concentrations were rather similar. In all HR runs, pyrite formation started after 48 h and achieved a maximum concentration after 1 week. In contrast, pyrite started to form only after 2 months in LR runs (Fe(III):S(-II) ∼ 0.2) with goethite, and no pyrite formation was observed in LR with lepidocrocite after 6 months. Rates in LR runs were at least 2-3 orders of magnitude slower than in HR runs. Sulfide oxidation rates were higher with lepidocrocite than with goethite, but no influence of the mineral type on pyrite formation rates in HR runs could be observed. Pyrite formation rates in HR runs could not be predicted by the classical model of Rickard (1975). We therefore propose a novel ferric-hydroxide-surface (FHS) pathway for rapid pyrite formation that is based on the formation of a precursor species >FeIIS2-. Its formation is competitive with FeSx precipitation at high aqueous sulfide concentrations and requires that a fraction of the ferric hydroxide surface not be covered by a surface precipitate of FeSx. Hence, the pyrite formation rate decreases with decreasing Fe(III):S(-II)aq ratio. In LR runs, pyrite formation appears to follow the model of Rickard (1975) and to be kinetically controlled by the dissolution of FeS. The FHS pathway will be prominent in many aquatic systems with terrestrial influence, i.e. abundance of ferric iron. 
We propose that the Fe(III):S(-II)aq ratio can be used as an indicator for rapid pyrite formation during early diagenesis in anoxic/suboxic aquatic systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12840638','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12840638"><span>Ground reaction forces and kinematics in distance running in older-aged men.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bus, Sicco A</p> <p>2003-07-01</p> <p>The biomechanics of distance running has not been studied before in older-aged runners but may be different than in younger-aged runners because of musculoskeletal degeneration at older age. This study aimed at determining whether the stance phase kinematics and ground reaction forces in running are different between younger- and older-aged men. Lower-extremity kinematics using three-dimensional motion analysis and ground reaction forces (GRF) using a force plate were assessed in 16 older-aged (55-65 yr) and 13 younger-aged (20-35 yr) well-trained male distance runners running at a self-selected (SRS) and a controlled (CRS) speed of 3.3 m.s-1. The older subjects ran at significantly lower self-selected speeds than the younger subjects (mean 3.34 vs 3.77 m.s-1). In both speed conditions, the older runners exhibited significantly more knee flexion at heel strike and significantly less knee flexion and extension range of motion. No age group differences were present in subtalar joint motion. Impact peak force (1.91 vs 1.70 BW) and maximal initial loading rate (107.5 vs 85.5 BW.s-1) were significantly higher in the older runners at the CRS. Maximal peak vertical and anteroposterior forces and impulses were significantly lower in the older runners at the SRS. 
The biomechanics of running differs between older- and younger-aged runners on several relevant parameters. The larger impact peak force and initial loading rate indicate a loss of shock-absorbing capacity in the older runners. This may increase their susceptibility to lower-extremity overuse injuries. Moreover, it emphasizes the importance of optimizing cushioning properties in the design and prescription of running shoes and suggests that older-aged runners should be cautious with running under conditions of high impact.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4447761','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4447761"><span>Effect of training in minimalist footwear on oxygen consumption during walking and running</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Judge, LW</p> <p>2015-01-01</p> <p>The present study sought to examine the effect of 5 weeks of training with minimalist footwear on oxygen consumption during walking and running. Thirteen college-aged students (male n = 7, female n = 6, age: 21.7±1.4 years, height: 168.9±8.8 cm, weight: 70.4±15.8 kg, VO2max: 46.6±6.6 ml·kg−1·min−1) participated in the present investigation. The participants did not have experience with minimalist footwear. Participants underwent metabolic testing during walking (5.6 km·hr−1), light running (7.2 km·hr−1), and moderate running (9.6 km·hr−1). The participants completed this assessment barefoot, in running shoes, and in minimalist footwear in a randomized order. The participants underwent 5 weeks of training with the minimalist footwear. Afterwards, participants repeated the metabolic testing. Data were analyzed via repeated-measures ANOVA. 
The analysis revealed a significant (F(4,32) = 7.576, ηp² = 0.408, p ≤ 0.001) interaction effect (time × treatment × speed). During the initial assessment, the minimalist footwear condition resulted in greater oxygen consumption at 9.6 km·hr−1 (p ≤ 0.05) compared to the barefoot condition, while the running shoe condition resulted in greater oxygen consumption than both the barefoot and minimalist condition at 7.2 and 9.6 km·hr−1. At post-testing the minimalist footwear was not different at any speed compared to the barefoot condition (p > 0.12). This study suggests that initially minimalist footwear results in greater oxygen consumption than running barefoot; however, with utilization, the oxygen consumption becomes similar. PMID:26060339</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26060339','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26060339"><span>Effect of training in minimalist footwear on oxygen consumption during walking and running.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bellar, D; Judge, L W</p> <p>2015-06-01</p> <p>The present study sought to examine the effect of 5 weeks of training with minimalist footwear on oxygen consumption during walking and running. Thirteen college-aged students (male n = 7, female n = 6, age: 21.7±1.4 years, height: 168.9±8.8 cm, weight: 70.4±15.8 kg, VO2max: 46.6±6.6 ml·kg(-1)·min(-1)) participated in the present investigation. The participants did not have experience with minimalist footwear. Participants underwent metabolic testing during walking (5.6 km·hr(-1)), light running (7.2 km·hr(-1)), and moderate running (9.6 km·hr(-1)). The participants completed this assessment barefoot, in running shoes, and in minimalist footwear in a randomized order. 
The participants underwent 5 weeks of training with the minimalist footwear. Afterwards, participants repeated the metabolic testing. Data were analyzed via repeated-measures ANOVA. The analysis revealed a significant (F(4,32) = 7.576, ηp² = 0.408, p ≤ 0.001) interaction effect (time × treatment × speed). During the initial assessment, the minimalist footwear condition resulted in greater oxygen consumption at 9.6 km·hr(-1) (p ≤ 0.05) compared to the barefoot condition, while the running shoe condition resulted in greater oxygen consumption than both the barefoot and minimalist condition at 7.2 and 9.6 km·hr(-1). At post-testing the minimalist footwear was not different at any speed compared to the barefoot condition (p > 0.12). This study suggests that initially minimalist footwear results in greater oxygen consumption than running barefoot; however, with utilization, the oxygen consumption becomes similar.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25125392','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25125392"><span>Cumulative dietary exposure to a selected group of pesticides of the triazole group in different European countries according to the EFSA guidance on probabilistic modelling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Boon, Polly E; van Donkersgoed, Gerda; Christodoulou, Despo; Crépet, Amélie; D'Addezio, Laura; Desvignes, Virginie; Ericsson, Bengt-Göran; Galimberti, Francesco; Ioannou-Kakouri, Eleni; Jensen, Bodil Hamborg; Rehurkova, Irena; Rety, Josselin; Ruprich, Jiri; Sand, Salomon; Stephenson, Claire; Strömberg, Anita; Turrini, Aida; van der Voet, Hilko; Ziegler, Popi; Hamey, Paul; van Klaveren, Jacob D</p> <p>2015-05-01</p> <p>The practicality of performing a cumulative dietary exposure assessment was examined according to the 
requirements of the EFSA guidance on probabilistic modelling. For this, the acute and chronic cumulative exposure to triazole pesticides was estimated using national food consumption and monitoring data of eight European countries. Both the acute and chronic cumulative dietary exposures were calculated according to two model runs (optimistic and pessimistic) as recommended in the EFSA guidance. The exposures obtained with these model runs differed substantially for all countries, with the highest exposures obtained with the pessimistic model run. In this model run, animal commodities, including cattle milk and different meat types, which entered the exposure calculations at the level of the maximum residue limit (MRL), contributed most to the exposure. We conclude that application of the optimistic model run on a routine basis for cumulative assessments is feasible. The pessimistic model run is laborious and the exposure results could be too far from reality. More experience with this approach is needed to stimulate the discussion of the feasibility of all the requirements, especially the inclusion of MRLs of animal commodities, which seems to result in unrealistic conclusions regarding their contribution to the dietary exposure. Copyright © 2014 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018Sci...360..474P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018Sci...360..474P"><span>DOE unveils climate model in advance of global test</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Popkin, Gabriel</p> <p>2018-05-01</p> <p>The world's growing collection of climate models has a high-profile new entry. Last week, after nearly 4 years of work, the U.S. 
Department of Energy (DOE) released computer code and initial results from an ambitious effort to simulate the Earth system. The new model is tailored to run on future supercomputers and designed to forecast not just how climate will change, but also how those changes might stress energy infrastructure. Results from an upcoming comparison of global models may show how well the new entrant works. But so far it is getting a mixed reception, with some questioning the need for another model and others saying the $80 million effort has yet to improve predictions of the future climate. Even the project's chief scientist, Ruby Leung of the Pacific Northwest National Laboratory in Richland, Washington, acknowledges that the model is not yet a leader.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850006195','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850006195"><span>Predictions of Cockpit Simulator Experimental Outcome Using System Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sorensen, J. A.; Goka, T.</p> <p>1984-01-01</p> <p>This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. 
Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3673159','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3673159"><span>Joint kinematics and kinetics of overground accelerated running versus running on an accelerated treadmill</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Van Caekenberghe, Ine; Segers, Veerle; Aerts, Peter; Willems, Patrick; De Clercq, Dirk</p> <p>2013-01-01</p> <p>Literature shows that running on an accelerated motorized treadmill is mechanically different from accelerated running overground. Overground, the subject has to enlarge the net anterior–posterior force impulse proportional to acceleration in order to overcome linear whole body inertia, whereas on a treadmill, this force impulse remains zero, regardless of belt acceleration. Therefore, it can be expected that changes in kinematics and joint kinetics of the human body also are proportional to acceleration overground, whereas no changes according to belt acceleration are expected on a treadmill. This study documents kinematics and joint kinetics of accelerated running overground and running on an accelerated motorized treadmill belt for 10 young healthy subjects. When accelerating overground, ground reaction forces are characterized by less braking and more propulsion, generating a more forward-oriented ground reaction force vector and a more forwardly inclined body compared with steady-state running. 
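The proportionality between net anterior-posterior impulse and acceleration noted above follows directly from the impulse-momentum theorem; a minimal illustration (all numbers are hypothetical, not values from the study):

```python
# Net anterior-posterior impulse needed to accelerate a runner:
# J = m * delta_v (impulse-momentum theorem), so the mean extra
# propulsive force scales linearly with the target acceleration.
# All numbers below are illustrative, not taken from the study.

def required_impulse(mass_kg: float, accel_ms2: float, duration_s: float) -> float:
    """Extra propulsive impulse (N*s) to sustain accel_ms2 for duration_s."""
    return mass_kg * accel_ms2 * duration_s

def mean_extra_force(mass_kg: float, accel_ms2: float) -> float:
    """Mean extra net propulsive force (N), proportional to acceleration."""
    return mass_kg * accel_ms2

m = 70.0                      # runner mass, kg (hypothetical)
for a in (0.5, 1.0, 2.0):     # accelerations, m/s^2
    print(a, mean_extra_force(m, a), required_impulse(m, a, 1.0))
```

On a treadmill, by contrast, this extra impulse term stays zero regardless of belt acceleration, which is the mechanical difference the abstract describes.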
This change in body orientation as such is partly responsible for the changed force direction. Besides this, more pronounced hip and knee flexion at initial contact, a larger hip extension velocity, smaller knee flexion velocity and smaller initial plantarflexion velocity are associated with less braking. A larger knee extension and plantarflexion velocity result in larger propulsion. Altogether, during stance, joint moments are not significantly influenced by acceleration overground. Therefore, we suggest that the overall behaviour of the musculoskeletal system (in terms of kinematics and joint moments) during acceleration at a certain speed remains essentially identical to steady-state running at the same speed, yet acting in a different orientation. However, because acceleration implies extra mechanical work to increase the running speed, muscular effort done (in terms of power output) must be larger. This is confirmed by larger joint power generation at the level of the hip and lower power absorption at the knee as the result of subtle differences in joint velocity. On a treadmill, ground reaction forces are not influenced by acceleration and, compared with overground, virtually no kinesiological adaptations to an accelerating belt are observed. Consequently, adaptations to acceleration during running differ from treadmill to overground and should be studied in the condition of interest. 
PMID:23676896</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1610305L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1610305L"><span>Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani</p> <p>2014-05-01</p> <p>Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of the released material, which are required inputs for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model to compute timing, location, and landslide volume with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape scale estimates. As a next step, debris flow paths were computed for landslides predicted with the CHLT model for a certain range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. 
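Simple runout estimators of the kind coupled to a triggering model are often based on an empirical angle-of-reach (Corominas-type) regression between released volume and the ratio of drop height to travel distance. A sketch of that idea, with placeholder coefficients rather than values fitted in any particular study:

```python
import math

# Empirical runout estimate via the angle-of-reach relation
#   log10(H/L) = a * log10(V) + b
# where H is drop height, L horizontal runout, V released volume.
# The coefficients a and b below are illustrative placeholders only.

def runout_length(height_m: float, volume_m3: float,
                  a: float = -0.105, b: float = -0.012) -> float:
    """Horizontal runout L (m) for drop height H (m) and volume V (m^3)."""
    h_over_l = 10.0 ** (a * math.log10(volume_m3) + b)
    return height_m / h_over_l

# Larger released volumes travel farther for the same drop height:
print(runout_length(500.0, 1e3))   # smaller landslide
print(runout_length(500.0, 1e5))   # larger landslide -> longer runout
```

Because a is negative, H/L shrinks as volume grows, so predicted runout lengthens with landslide volume, which is why the CHLT-predicted volume matters for the runout step.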
The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the predictability of the impact of such abruptly released masses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170005747','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170005747"><span>Comparison of Taxi Time Prediction Performance Using Different Taxi Speed Decision Trees</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lee, Hanbong</p> <p>2017-01-01</p> <p>In the STBO modeler and tactical surface scheduler for the ATD-2 project, taxi speed decision trees are used to calculate the unimpeded taxi times of flights taxiing on the airport surface. The initial taxi speed values in these decision trees did not show good prediction accuracy of taxi times. Using the more recent, reliable surveillance data, new taxi speed values in the ramp area and movement area were computed. Before integrating these values into the STBO system, we performed test runs using live data from Charlotte airport, with different taxi speed settings: 1) initial taxi speed values and 2) new ones. Taxi time prediction performance was evaluated by comparing various metrics. The results show that the new taxi speed decision trees can calculate the unimpeded taxi-out times more accurately.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820015815','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820015815"><span>On the wake of a Darrieus turbine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Base, T. E.; Phillips, P.; Robertson, G.; Nowak, E. 
S.</p> <p>1981-01-01</p> <p>The theory and experimental measurements on the aerodynamic decay of the wake from a high-performance vertical axis wind turbine are discussed. In the initial experimental study, the wake downstream of a model Darrieus rotor, 28 cm in diameter and 45.5 cm in height, was measured in a Boundary Layer Wind Tunnel. The wind turbine was run at the design tip speed ratio of 5.5. It was found that the wake decayed at a slower rate with distance downstream of the turbine than a wake from a screen with troposkein shape and drag force characteristics similar to those of the Darrieus rotor. The initial wind tunnel results indicated that vertical axis wind turbines should be spaced at least forty diameters apart to avoid mutual power depreciation greater than ten per cent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19830033191&hterms=water+filters&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dwater%2Bfilters','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19830033191&hterms=water+filters&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dwater%2Bfilters"><span>Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cohn, S.; Isaacson, E.; Ghil, M.</p> <p>1981-01-01</p> <p>The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for their effectiveness as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. 
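The scalar analogue of the Kalman filter examined in such studies fits in a few lines; the model, noise variances, and observations below are illustrative only, not the shallow-water system of the paper:

```python
# Minimal scalar Kalman filter cycle illustrating the forecast/analysis
# steps that the KB filter generalizes to multivariate systems.
# M: model operator, Q: model error variance, H: observation operator,
# R: observation error variance. All values are illustrative.

def kalman_step(x, P, z, M=1.0, Q=0.01, H=1.0, R=0.25):
    # Forecast step: propagate the state estimate and its error variance.
    x_f = M * x
    P_f = M * P * M + Q
    # Analysis step: the optimal gain weights forecast vs. observation.
    K = P_f * H / (H * P_f * H + R)
    x_a = x_f + K * (z - H * x_f)
    P_a = (1.0 - K * H) * P_f
    return x_a, P_a

x, P = 0.0, 1.0
for z in (1.2, 0.9, 1.1, 1.0):   # noisy observations of a constant state
    x, P = kalman_step(x, P, z)
print(round(x, 3), round(P, 4))  # estimate approaches ~1, variance shrinks
```

OI can be seen as this same update with a prescribed, static gain instead of the dynamically evolved `K`, which is the comparison the study makes.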
The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.8341J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.8341J"><span>A comparison of daily precipitation metrics downscaled using SDSM and WRF + WRFDA models over the Iberian Peninsula.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>José González-Rojí, Santos; Wilby, Robert L.; Sáenz, Jon; Ibarra-Berastegi, Gabriel</p> <p>2017-04-01</p> <p>Downscaling via the Statistical DownScaling Model (SDSM) version 5.2 and two different configurations of the dynamical WRF model (with and without 3DVAR data assimilation) was evaluated for the estimation of daily precipitation over 21 sites across the Iberian Peninsula during the period 2010-2014. Six different strategies were used to calibrate the SDSM model. These options cover (1) use of NCEP/NCAR R1 Reanalysis and (2) ERA Interim data for downscaling predictor variables calibrated with data from periods (3) 1948-2009 (NCEP/NCAR R1) and (4) 1979-2009 (NCEP/NCAR R1 and ERA Interim). Additionally, for the ERA Interim case, two different grid resolutions have been used, (5) 2.5° and (6) 0.75°. For the NCEP/NCAR R1 case, by contrast, only the 2.5° resolution was used. Configuring the SDSM model in this way allows testing the sensitivity of the results to the origin of the predictors, the calibration period, and the reanalysis resolution. 
On the other hand, ERA Interim data at the highest resolution were used as the initial/boundary conditions to run WRF simulations with a 15 km x 15 km horizontal resolution over the Iberian Peninsula, for two different configurations. The first experiment (N) was run using the same configuration typically used for numerical downscaling, with information being fed through the boundaries of the domain. The second experiment (D) was run using 3DVAR data assimilation at 00UTC, 06UTC, 12UTC and 18UTC. In both cases, WRF simulations were run over the period 2009-2014, using the first year (2009) as spin-up for the soil model. Results from the WRF N and D runs and a comparable SDSM setup for the period 2010-2014 were evaluated using observations from the ECA and E-OBS datasets. In each case, model skill was assessed using seven daily precipitation metrics (absolute mean, wet-day intensity, 90th percentile, maximum 5-day total, maximum number of consecutive dry days, fraction of total from heavy events, and number of heavy events, defined here as values over the 90th percentile threshold). Our results show that the SDSM model improves its behaviour when using predictors from the ERA Interim Reanalysis. Improvements are even more impressive when using the 0.75° resolution for ERA Interim. Better results than using WRF D are obtained with this configuration of the SDSM model for mean precipitation and precipitation intensity. Overall, the analysis reveals the extent to which the skill of SDSM can be improved through judicious choice of downscaling predictor source, grid resolution and calibration period. 
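The seven daily precipitation metrics used in the comparison can be computed from a plain daily series in a few lines. The wet-day threshold of 1 mm and the nearest-rank percentile are common conventions assumed here, not details stated in the abstract:

```python
# Daily precipitation metrics from a list of daily totals (mm).
# Wet day defined as >= 1 mm (assumed convention); "heavy" events are
# wet days exceeding the 90th percentile of wet-day amounts.

def precip_metrics(daily, wet_thresh=1.0):
    wet = [p for p in daily if p >= wet_thresh]
    total = sum(daily)
    # 90th percentile of wet-day amounts (nearest-rank method).
    s = sorted(wet)
    p90 = s[max(0, int(0.9 * len(s)) - 1)] if s else 0.0
    heavy = [p for p in wet if p > p90]
    max5 = max(sum(daily[i:i + 5]) for i in range(len(daily) - 4))
    # Longest run of consecutive dry days.
    cdd = run = 0
    for p in daily:
        run = run + 1 if p < wet_thresh else 0
        cdd = max(cdd, run)
    return {
        "mean": total / len(daily),
        "wet_day_intensity": sum(wet) / len(wet) if wet else 0.0,
        "p90": p90,
        "max_5day": max5,
        "max_consec_dry": cdd,
        "heavy_fraction": sum(heavy) / total if total else 0.0,
        "n_heavy": len(heavy),
    }

m = precip_metrics([0, 0, 2, 10, 0, 0, 0, 5, 30, 1, 0, 0])
print(m["max_consec_dry"], m["max_5day"])
```

Exact threshold and percentile conventions vary between studies, so results are only comparable when all models are scored with one fixed definition, as done here for SDSM and both WRF runs.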
Moreover, the computationally efficient SDSM tool can achieve comparable skill to WRF over a range of precipitation metrics and the contrasting rainfall regimes of the Iberian Peninsula.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUSM.V43A..06S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUSM.V43A..06S"><span>Assessing Degree of Susceptibility to Landslide Hazard</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sheridan, M. F.; Cordoba, G. A.; Delgado, H.; Stefanescu, R.</p> <p>2013-05-01</p> <p>The modeling of hazardous mass flows, both dry and water saturated, is currently an area of active research and several stable models have now emerged that have differing degrees of physical and mathematical fidelity. Models based on the early work of Savage and Hutter (1989) assume that very large dense granular flows can be modeled as incompressible continua governed by a Coulomb failure criterion. Based on this concept, Patra et al. (2005) developed a code for dry avalanches, which proposes a thin-layer mathematical model similar to the shallow-water equations. This concept was implemented in the widely-used TITAN2D program, which integrates the shock-capturing Godunov solution methodology for the equation system. We propose a method to assess the susceptibility of specific locations to landslides following heavy tephra fall using the TITAN2D code. Successful application requires that the range of several uncertainties be framed in the selection of model input data: 1) initial conditions, like volume and location of origin of the landslide, 2) bed and internal friction parameters and 3) digital elevation model (DEM) uncertainties. Among the possible ways of coping with these uncertainties, we chose to use Latin Hypercube Sampling (LHS). 
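A minimal Latin Hypercube Sampling sketch over the four TITAN2D inputs used in this kind of study (initial volume, UTM x and y, bed friction); the parameter ranges below are placeholders, not values from the study:

```python
import random

# Latin Hypercube Sampling: split each parameter range into n equal
# strata, draw exactly one value per stratum, then shuffle each column
# so strata are paired randomly across parameters. Ranges are
# illustrative placeholders only.

def lhs(n_samples, bounds, rng=random.Random(42)):
    columns = []
    for lo, hi in bounds:
        width = (hi - lo) / n_samples
        col = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(col)           # random pairing across dimensions
        columns.append(col)
    return list(zip(*columns))     # one tuple of inputs per model run

bounds = [(1e4, 1e6),         # initial volume, m^3
          (200000, 210000),   # UTM x, m
          (9500000, 9510000), # UTM y, m
          (10.0, 20.0)]       # bed friction angle, degrees
runs = lhs(20, bounds)
print(len(runs), len(runs[0]))   # 20 runs x 4 parameters
```

Each tuple in `runs` would become one TITAN2D input file, which is essentially what the Octave script described below does; the stratification is why far fewer runs are needed than with plain Monte Carlo sampling.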
This statistical technique reduces a computationally intractable problem to such an extent that it is possible to apply it, even with current personal computers. LHS requires that there is only one sample in each row and each column of the sampling matrix, where each row (multi-dimensional) corresponds to each uncertainty. LHS requires less than 10% of the sample runs needed by Monte Carlo approaches to achieve a stable solution. In our application, LHS output provides model sampling for 4 input parameters: initial random volumes, UTM location (x and y), and bed friction. We developed a simple Octave script to link the output of LHS with TITAN2D. In this way, TITAN2D can run several times with successively different initial conditions provided by the LHS routine. Finally, the set of results from TITAN2D is processed to obtain the distribution of maximum exceedance probability given that a landslide occurs at a place of interest. We apply this method to find sectors least prone to be affected by landslides in a region along the Panamerican Highway in the southern part of Colombia. The goal of such a study is to provide decision makers with information to improve their assessments regarding permissions for development along the highway.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhyA..457..316T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhyA..457..316T"><span>Impacts of the driver's bounded rationality on the traffic running cost under the car-following model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai</p> <p>2016-09-01</p> <p>The driver's bounded rationality has significant influences on the micro driving behavior, and researchers have proposed some traffic flow models incorporating the driver's bounded rationality. 
However, little effort has been made to explore the effects of the driver's bounded rationality on the trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on his running cost and the system's total cost under three traffic running cost formulations. The numerical results show that considering the driver's bounded rationality increases each of his running costs and the system's total cost under all three formulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002JApMe..41..488W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002JApMe..41..488W"><span>Ensemble Simulations with Coupled Atmospheric Dynamic and Dispersion Models: Illustrating Uncertainties in Dosage Simulations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.</p> <p>2002-05-01</p> <p>Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. 
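Turning an ensemble of simulated dosages at a location into an explicit exceedance probability is conceptually simple; a sketch with fabricated dosage values (not data from the study):

```python
# Explicit dosage exceedance probability from an ensemble: the fraction
# of members whose simulated dosage at a location exceeds a threshold.
# Dosage values below are made up purely for illustration.

def exceedance_probability(ensemble_dosages, threshold):
    """Fraction of ensemble members with dosage above `threshold`."""
    n = len(ensemble_dosages)
    return sum(d > threshold for d in ensemble_dosages) / n

# One location, 8 hypothetical ensemble members (dosage units arbitrary):
dosages = [0.2, 1.5, 0.8, 2.4, 0.1, 1.1, 0.9, 3.0]
print(exceedance_probability(dosages, 1.0))   # -> 0.5
```

The study's comparison is between such explicit member-counting probabilities and probabilities generated internally by the dispersion model from ensemble mean winds plus their variability.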
Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1439174-temporal-decompostion-distribution-system-quasi-static-time-series-simulation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1439174-temporal-decompostion-distribution-system-quasi-static-time-series-simulation"><span>Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Mather, Barry A; Hunsberger, Randolph J</p> <p></p> <p>This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. 
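The temporal decomposition investigated in the runtime-reduction study amounts to splitting the simulation horizon into chunks that independent workers can run, each preceded by a short warm-up window so controls can re-initialize before the kept portion begins; a sketch with illustrative chunk and overlap sizes:

```python
# Temporal decomposition of a quasi-static time-series run: split
# n_steps into n_chunks; each chunk (except the first) starts `overlap`
# steps early as a controls warm-up, and those warm-up steps are
# discarded when results are merged. Sizes below are illustrative.

def decompose(n_steps, n_chunks, overlap):
    """Return (warm_start, keep_from, end) index triples per chunk."""
    size = n_steps // n_chunks
    chunks = []
    for k in range(n_chunks):
        start = k * size
        end = n_steps if k == n_chunks - 1 else (k + 1) * size
        chunks.append((max(0, start - overlap), start, end))
    return chunks

# A year of hourly steps split across 8 workers, 24-step warm-up:
chunks = decompose(8760, 8, 24)
print(chunks[0], chunks[1])
```

Each triple would drive one independent OpenDSS-style run; the residual boundary errors discussed in the abstract are exactly the state mismatches the warm-up window tries to damp out.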
In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009EGUGA..11.3405B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009EGUGA..11.3405B"><span>Short-range ensemble predictions based on convection perturbations in the Eta Model for the Serra do Mar region in Brazil</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bustamante, J. F. F.; Chou, S. C.; Gomes, J. L.</p> <p>2009-04-01</p> <p>Southeast Brazil, in the coastal and mountain region called Serra do Mar, between Sao Paulo and Rio de Janeiro, is subject to frequent events of landslides and floods. The Eta Model has been producing good quality forecasts over South America at about 40-km horizontal resolution. For this type of hazard, however, more detailed and probabilistic information on the risks should be provided with the forecasts. Thus, a short-range ensemble prediction system (SREPS) based on the Eta Model is being constructed. Ensemble members derived from perturbed initial and lateral boundary conditions did not provide enough spread for the forecasts. Members with model physics perturbation are being included and tested. The objective of this work is to construct more members for the Eta SREPS by adding physics-perturbed members. The Eta Model is configured at 10-km resolution and 38 layers in the vertical. The domain covered is most of Southeast Brazil, centered over the Serra do Mar region. The constructed members comprise variations of the cumulus parameterization Betts-Miller-Janjic (BMJ) and Kain-Fritsch (KF) schemes. 
Three members were constructed from the BMJ scheme by varying the saturation pressure deficit profile over land and sea, and two KF members were included: the standard KF scheme and a version with added momentum flux. One of the runs with the BMJ scheme is the control run, as it was used in the initial-condition perturbation SREPS. The forecasts were tested for six cases of South Atlantic Convergence Zone (SACZ) events. The SACZ is a common summer-season feature of the Southern Hemisphere that causes persistent rain for a few days over Southeast Brazil, and it frequently organizes over the Serra do Mar region. These events are particularly interesting because of the persistent rains that can accumulate large amounts and cause generalized landslides and death. With respect to precipitation, the KF scheme versions have been shown to reach the larger precipitation peaks of the events. On the other hand, for predicted 850-hPa temperature, the KF scheme versions produce a positive bias and the BMJ versions a negative bias. Therefore, the ensemble mean forecast of 850-hPa temperature of this SREPS exhibits a smaller error than the control member. Specific humidity shows a smaller bias in the KF scheme. 
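The finding that the ensemble mean verifies better than the control member reflects a general property of ensemble averaging: members with offsetting biases and uncorrelated noise partially cancel. A synthetic demonstration (fabricated data, not the Eta SREPS forecasts):

```python
import math
import random

# Synthetic illustration: ten members with alternating warm/cold biases
# and independent noise around a known "truth" series. The ensemble
# mean's RMSE comes out below the average member RMSE.

random.seed(0)
truth = [20.0 + 0.1 * t for t in range(100)]            # "observed" series
members = [[x + random.gauss(1.0 if m % 2 else -1.0, 2.0) for x in truth]
           for m in range(10)]                          # biased, noisy members

def rmse(forecast, obs):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs))

ens_mean = [sum(vals) / len(vals) for vals in zip(*members)]
member_rmses = [rmse(m, truth) for m in members]
print(round(rmse(ens_mean, truth), 2), round(sum(member_rmses) / 10, 2))
```

This is exactly the mechanism at work when the KF members' warm 850-hPa bias and the BMJ members' cold bias average out in the SREPS ensemble mean.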
In general, the ensemble mean produced forecasts closer to the observations than the control run.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26097232','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26097232"><span>Application of multivariate analysis and mass transfer principles for refinement of a 3-L bioreactor scale-down model--when shake flasks mimic 15,000-L bioreactors better.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa</p> <p>2015-01-01</p> <p>Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation run in bioreactors with one-sided pH control, together with its satellites (small-scale runs in 3-L bioreactors and shake flasks conducted using the same post-inoculation cultures and nutrient feeds), indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors did. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. 
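Multivariate screening of this kind typically starts with PCA to flag outlier runs and the variables that drive them; a minimal sketch on fabricated data (assuming NumPy is available; the numbers are illustrative, not real process data):

```python
import numpy as np

# Minimal PCA via SVD: rows are runs (e.g., 15,000-L, 3-L, shake flask),
# columns are process variables (e.g., pCO2, pH, titer). Data are
# fabricated solely to show the mechanics.

def pca_scores(X, n_components=2):
    """Center columns, then project onto the leading principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

# 6 runs x 3 variables; the 3rd run is an obvious outlier in variable 0.
X = np.array([[1.0, 0.2, 5.1],
              [1.1, 0.1, 5.0],
              [9.0, 0.3, 5.2],
              [0.9, 0.2, 4.9],
              [1.2, 0.1, 5.1],
              [1.0, 0.3, 5.0]])
scores, loadings = pca_scores(X)
# The outlier run stands out on PC1, and the PC1 loading vector is
# dominated by the discriminatory variable (column 0).
print(np.argmax(np.abs(scores[:, 0])), np.argmax(np.abs(loadings[0])))
```

PLS, OPLS and discriminant analysis extend the same idea by directing the projection toward a known grouping (here, scale), which is how the pCO2/pH hypothesis described next was isolated.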
Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of large scale. © 2015 American Institute of Chemical Engineers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.A41G0130M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.A41G0130M"><span>Verification and Validation of a Navy ESPC Hindcast with Loosely Coupled Data Assimilation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Metzger, E. J.; Barton, N. P.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T. R.; Ridout, J. A.; Franklin, D. S.; Zamudio, L.; Posey, P. G.; Reynolds, C. A.; Phelps, M.</p> <p>2016-12-01</p> <p>The US Navy is developing an Earth System Prediction Capability (ESPC) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. It will be a fully coupled global atmosphere/ocean/ice/wave/land prediction system providing daily deterministic forecasts out to 16 days at high horizontal and vertical resolution, and daily probabilistic forecasts out to 45 days at lower resolution. 
The system will run at the Navy DoD Supercomputing Resource Center with an initial operational capability scheduled for the end of FY18 and the final operational capability scheduled for FY22. The individual model and data assimilation components include: atmosphere - NAVy Global Environmental Model (NAVGEM) and Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR); ocean - HYbrid Coordinate Ocean Model (HYCOM) and Navy Coupled Ocean Data Assimilation (NCODA); ice - Community Ice CodE (CICE) and NCODA; WAVEWATCH III™ and NCODA; and land - NAVGEM Land Surface Model (LSM). Currently, NAVGEM/HYCOM/CICE are three-way coupled and each model component is cycling with its respective assimilation scheme. The assimilation systems do not communicate with each other, but future plans call for these to be coupled as well. NAVGEM runs with a 6-hour update cycle while HYCOM/CICE run with a 24-hour update cycle. The T359L50 NAVGEM/0.08° HYCOM/0.08° CICE system has been integrated in hindcast mode and verification/validation metrics have been computed against unassimilated observations and against stand-alone versions of NAVGEM and HYCOM/CICE. This presentation will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. 
Overall, the global coupled ESPC system is performing with comparable skill to the stand-alone systems at the nowcast time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://climate.pnnl.gov','SCIGOVWS'); return false;" href="http://climate.pnnl.gov"><span>PNNL: Climate Modelling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.science.gov/aboutsearch.html">Science.gov Websites</a></p> <p></p> <p></p> <p>Runs [ <em>Open</em> Access : Password Protected ] CESM Development CESM Runs [ <em>Open</em> Access : Password Protected ] WRF Development WRF Runs [ <em>Open</em> Access : Password Protected ] Climate Modeling Home Projects Links Literature Manuscripts Publications Polar <em>Group</em> Meeting (2012) ASGC Home ASGC Jobs Web Calendar Wiki Internal</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1038871','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1038871"><span>Systematic approach to verification and validation: High explosive burn models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Menikoff, Ralph; Scovel, Christina A.</p> <p>2012-04-16</p> <p>Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. 
Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models.
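The scripted setup step just described (read key experimental parameters from a data-file header, then generate a simulation input file) might look roughly like the sketch below. The header keys, the input-deck format, and the burn-model name are illustrative assumptions, not the actual HED or hydro-code formats.

```python
# Hypothetical sketch of the automated V&V setup step: parse experiment
# metadata from a data-file header, then render a simulation input file.
# Header keys and deck format are invented for illustration.

def parse_header(lines):
    """Collect '# key: value' metadata lines into a dict."""
    meta = {}
    for line in lines:
        if line.startswith("#") and ":" in line:
            key, _, value = line[1:].partition(":")
            meta[key.strip()] = value.strip()
        else:
            break  # header ends at the first data line
    return meta

def make_input_deck(meta, burn_model):
    """Render a minimal input deck for one (experiment, burn model) pair."""
    return (
        f"explosive = {meta['explosive']}\n"
        f"density   = {meta['density_g_cc']}\n"
        f"impact_velocity = {meta['impact_velocity_km_s']}\n"
        f"burn_model = {burn_model}\n"
    )

header = [
    "# explosive: PBX-9502",
    "# density_g_cc: 1.89",
    "# impact_velocity_km_s: 1.2",
    "0.0  0.0",  # first row of gauge data
]
meta = parse_header(header)
deck = make_input_deck(meta, "Forest-Fire")
```

In the full pipeline the generated deck would drive the hydro code and the resulting velocity-gauge output would be plotted against the experimental record.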
The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19820017703&hterms=seasonal+forecast&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dseasonal%2Bforecast','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19820017703&hterms=seasonal+forecast&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dseasonal%2Bforecast"><span>The seasonal-cycle climate model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Marx, L.; Randall, D. A.</p> <p>1981-01-01</p> <p>The seasonal cycle run which will become the control run for the comparison with runs utilizing codes and parameterizations developed by outside investigators is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized.
The 400-day seasonal cycle run serves as a control run for both medium- and long-range climate forecasts as well as sensitivity studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/sir/2006/5137/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/sir/2006/5137/"><span>A graphical modeling tool for evaluating nitrogen loading to and nitrate transport in ground water in the mid-Snake region, south-central Idaho</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Clark, David W.; Skinner, Kenneth D.; Pollock, David W.</p> <p>2006-01-01</p> <p>A flow and transport model was created with a graphical user interface to simplify the evaluation of nitrogen loading and nitrate transport in the mid-Snake region in south-central Idaho. This model and interface package, the Snake River Nitrate Scenario Simulator, uses the U.S. Geological Survey's MODFLOW 2000 and MOC3D models. The interface, which is enabled for use with geographic information systems (GIS), was created using ESRI's royalty-free MapObjects LT software. The interface lets users view initial nitrogen-loading conditions (representing conditions as of 1998), alter the nitrogen loading within selected zones by specifying a multiplication factor and applying it to the initial condition, run the flow and transport model, and view a graphical representation of the modeling results. The flow and transport model of the Snake River Nitrate Scenario Simulator was created by rediscretizing and recalibrating a clipped portion of an existing regional flow model. The new subregional model was recalibrated with newly available water-level data and spring and ground-water nitrate concentration data for the study area. An updated nitrogen input GIS layer controls the application of nitrogen to the flow and transport model.
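The zone-based multiplication-factor mechanism described for this simulator can be illustrated with a minimal sketch; the loading values, zone geometry, and factors below are synthetic, not code or data from the Snake River Nitrate Scenario Simulator.

```python
import numpy as np

# Illustrative sketch: scale an initial (1998-condition) nitrogen-loading
# grid by user-chosen multiplication factors in predefined zones, leaving
# the initial condition itself untouched.

initial_load = np.array([[10.0, 10.0, 20.0],
                         [10.0, 30.0, 20.0]])   # kg/ha, hypothetical
zone_id = np.array([[1, 1, 2],
                    [1, 3, 2]])                 # zone membership per cell
factors = {1: 0.5, 2: 1.0, 3: 2.0}              # user-specified multipliers

scenario_load = initial_load.copy()
for zone, factor in factors.items():
    scenario_load[zone_id == zone] *= factor
```

The scenario grid would then be passed to the flow and transport model in place of the initial loading.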
Users can alter the nitrogen application to the flow and transport model by altering the nitrogen load in predefined spatial zones contained within similar political, hydrologic, and size-constrained boundaries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AtmEn..77..990Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AtmEn..77..990Z"><span>A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Žabkar, Rahela; Koračin, Darko; Rakovec, Jože</p> <p>2013-10-01</p> <p>A high ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results with respect to extremes compared to the reference run.
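The ensemble-versus-reference comparison invoked here, like the ensemble-mean result noted earlier in this list, can be demonstrated on synthetic data: averaging many perturbed runs cancels run-to-run noise, so the ensemble mean typically verifies better than any single run. The 51-member size simply mirrors the ensemble above; everything else is invented for illustration.

```python
import numpy as np

# Synthetic demonstration: the mean of 51 noisy "members" has a lower
# RMSE against the truth than a single control run drawn from the same
# distribution (noise shrinks roughly as 1/sqrt(n_members)).

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
members = truth + rng.normal(0.0, 0.5, size=(51, 100))  # 51-member ensemble
control = members[0]                                    # one reference run
ens_mean = members.mean(axis=0)

def rmse(forecast, obs):
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

rmse_control = rmse(control, truth)
rmse_ensemble = rmse(ens_mean, truth)
```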
Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by model insensitivity to variations in the nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5548181','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5548181"><span>Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua</p> <p>2017-01-01</p> <p>In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. 
In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW method, resulting in Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy, and segment the organs slice by slice via multi-object OAAM method. For the segmentation part, a 3D shape constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also tested on the MICCAI 2007 grand challenge for liver segmentation training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3%, false positive volume fraction (FPVF) < 0.2% can be achieved. (b) The initialization performance can be improved by combining AAM and LW. (c) The multi-object strategy greatly facilitates the initialization. (d) Compared to the traditional 3D AAM method, the pseudo 3D OAAM method achieves comparable performance while running 12 times faster. (e) The performance of proposed method is comparable to the state of the art liver segmentation algorithm. The executable version of 3D shape constrained GC with user interface can be downloaded from website http://xinjianchen.wordpress.com/research/. 
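The TPVF and FPVF figures quoted above can be computed directly from binary masks. The exact normalization of FPVF varies across papers, so the definitions below (both overlaps normalized by the reference volume) are an assumption, and the masks are toy data.

```python
import numpy as np

# Toy overlap-metric computation (definitions assumed):
#   TPVF = |segmentation AND reference| / |reference| * 100
#   FPVF = |segmentation AND NOT reference| / |reference| * 100

reference = np.zeros((8, 8), dtype=bool)
reference[2:6, 2:6] = True          # 16-voxel "organ"

segmentation = np.zeros((8, 8), dtype=bool)
segmentation[2:6, 2:5] = True       # misses one column of the organ
segmentation[2:6, 6] = True         # leaks one column outside it

tp = np.logical_and(segmentation, reference).sum()
fp = np.logical_and(segmentation, ~reference).sum()
tpvf = 100.0 * tp / reference.sum()
fpvf = 100.0 * fp / reference.sum()
```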
PMID:22311862</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23003216','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23003216"><span>Phast4Windows: a 3D graphical user interface for the reactive-transport simulator PHAST.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Charlton, Scott R; Parkhurst, David L</p> <p>2013-01-01</p> <p>Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties-the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones-and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport. Published 2012. This article is a U.S. 
Government work and is in the public domain in the USA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27271627','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27271627"><span>Street Viewer: An Autonomous Vision Based Traffic Tracking System.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano</p> <p>2016-06-03</p> <p>The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. 
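The learning/online mode switching described for Street Viewer can be caricatured as a small state machine. The class below is a hypothetical sketch, not the Street Viewer implementation: it tracks a single scalar "flow" value where the real system learns spatial flow patterns, and the stability and drift thresholds are invented.

```python
# Hypothetical mode-switching sketch: learn a flow model until it
# stabilizes, then operate online; fall back to learning when the
# observed flow drifts away from the learned model.

class ModeSwitcher:
    def __init__(self, stable_after=3, drift_threshold=0.5):
        self.mode = "learning"
        self.model = None            # learned flow estimate
        self.stable_count = 0
        self.stable_after = stable_after
        self.drift_threshold = drift_threshold

    def observe(self, flow):
        if self.mode == "learning":
            if self.model is not None and abs(flow - self.model) < self.drift_threshold:
                self.stable_count += 1
            else:
                self.stable_count = 0
            # running average of observed flow
            self.model = flow if self.model is None else 0.5 * (self.model + flow)
            if self.stable_count >= self.stable_after:
                self.mode = "online"
        elif abs(flow - self.model) > self.drift_threshold:
            self.mode = "learning"   # flow pattern changed: re-learn
            self.stable_count = 0
        return self.mode

switcher = ModeSwitcher()
modes = [switcher.observe(10.0) for _ in range(5)]  # stable traffic
after_drift = switcher.observe(20.0)                # pattern change
```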
The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22525780-radiative-corrections-from-heavy-fast-roll-fields-during-inflation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22525780-radiative-corrections-from-heavy-fast-roll-fields-during-inflation"><span>Radiative corrections from heavy fast-roll fields during inflation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S., E-mail: jain@cp3.dias.sdu.dk, E-mail: sandora@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk</p> <p>2015-06-01</p> <p>We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum.
An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22454557-radiative-corrections-from-heavy-fast-roll-fields-during-inflation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22454557-radiative-corrections-from-heavy-fast-roll-fields-during-inflation"><span>Radiative corrections from heavy fast-roll fields during inflation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S.</p> <p>2015-06-09</p> <p>We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum.
Interestingly, within the present framework it is a generic outcome that a large running implies a small field model with a vanishing tensor-to-scalar ratio, circumventing the normal expectation that small field models typically lead to an unobservably small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNS41B0011J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNS41B0011J"><span>Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jazayeri, S.; Kruse, S.</p> <p>2017-12-01</p> <p>We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.)
A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model will be introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, then the user estimates the source wavelet that best fits the observed data under a sparsity assumption about the earth's response. Last, PEST runs gprMax with the initial model and calculates the misfit between the synthetic and observed data, and using an iterative algorithm calling gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or a global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data.
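The iterative misfit-minimization loop described here can be caricatured with a toy forward model. In the sketch below a linear function stands in for gprMax and plain gradient descent stands in for PEST's update rule, so this illustrates only the structure of the loop (start model, forward run, misfit, parameter update), not the actual FWIGPR algorithms.

```python
import numpy as np

# Toy inversion loop: fit (depth, velocity) of a linear "travel-time"
# forward model to synthetic observed data, starting from a rough
# "ray-based" initial model and iterating until the misfit is small.

def forward(params, x):
    depth, velocity = params
    return depth + velocity * x          # stand-in for a GPR forward model

x = np.linspace(0.0, 1.0, 20)
observed = forward((1.5, 0.8), x)        # "field data" from a known truth

params = np.array([1.0, 1.0])            # initial (starting) model
for _ in range(2000):
    residual = forward(params, x) - observed
    # gradient of 0.5 * sum(residual**2) w.r.t. (depth, velocity)
    grad = np.array([residual.sum(), (residual * x).sum()])
    params -= 0.01 * grad                # simple fixed-step update

misfit = float(np.sum((forward(params, x) - observed) ** 2))
```

Repeating the loop from several starting models, as the abstract suggests, is the usual check that the fit is not a local minimum.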
Ongoing research will focus on FWI of more complex scenarios.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28430546','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28430546"><span>Overall Preference of Running Shoes Can Be Predicted by Suitable Perception Factors Using a Multiple Regression Model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah</p> <p>2017-05-01</p> <p>This study examined (a) the strength of four individual footwear perception factors to influence the overall preference of running shoes and (b) whether these perception factors satisfied the nonmulticollinear assumption in a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, where values for tolerance ranged from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, thus assigning each factor a true potential to improve overall preference for the users. 
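The tolerance and VIF values quoted in the running-shoe study are consistent with the standard relationship VIF = 1/tolerance, where tolerance is 1 - R^2 from regressing one predictor on the remaining predictors:

```python
# Tolerance and VIF are two views of the same collinearity diagnostic:
#   tolerance_i = 1 - R_i^2  (R_i^2 from regressing predictor i on the rest)
#   VIF_i       = 1 / tolerance_i

def vif_from_tolerance(tolerance):
    return 1.0 / tolerance

vif_worst = vif_from_tolerance(0.36)  # most collinear perception factor
vif_best = vif_from_tolerance(0.72)   # least collinear perception factor
```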
The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003APS..DPPFP1117E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003APS..DPPFP1117E"><span>Web Service Model for Plasma Simulations with Automatic Post Processing and Generation of Visual Diagnostics*</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.</p> <p>2003-10-01</p> <p>We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. 
** Also University of Colorado.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140011332','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140011332"><span>Yield Behavior of Solution Treated and Aged Ti-6Al-4V</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ring, Andrew J.; Baker, Eric H.; Salem, Jonathan A.; Thesken, John C.</p> <p>2014-01-01</p> <p>Post-yield uniaxial tension-compression tests were run on a solution treated and aged (STA) titanium 6-percent aluminum 4-percent vanadium (Ti-6Al-4V) alloy to determine the yield behavior on load reversal. The material exhibits plastic behavior almost immediately on load reversal implying a strong Bauschinger effect. The resultant stress-strain data was compared to a 1D mechanics model and a finite element model used to design a composite overwrapped pressure vessel (COPV). Although the models and experimental data compare well for the initial loading and unloading in the tensile regime, agreement is lost in the compressive regime due to the Bauschinger effect and the assumption of perfect plasticity.
The test data presented here are being used to develop more accurate cyclic hardening constitutive models for future finite element design analysis of COPVs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20120013630&hterms=wave+oscillation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dwave%2Boscillation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20120013630&hterms=wave+oscillation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dwave%2Boscillation"><span>Genesis of Twin Tropical Cyclones as Revealed by a Global Mesoscale Model: The Role of Mixed Rossby Gravity Waves</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shen, Bo-Wen; Tao, Wei-Kuo; Lin, Yuh-Lang; Laing, Arlene</p> <p>2012-01-01</p> <p>In this study, it is proposed that twin tropical cyclones (TCs), Kesiny and 01A, in May 2002 formed in association with the scale interactions of three gyres that appeared as a convectively-coupled mixed Rossby gravity (ccMRG) wave during an active phase of the Madden-Julian Oscillation (MJO). This is shown by analyzing observational data and performing simulations using a global mesoscale model. A 10-day control run is initialized at 0000 UTC 1 May 2002 with grid-scale condensation but no cumulus parameterizations. The ccMRG wave was identified as encompassing two developing and one non-developing gyres, the first two of which intensified and evolved into the twin TCs. The control run is able to reproduce the evolution of the ccMRG wave and the formation of the twin TCs about two and five days in advance as well as their subsequent intensity evolution and movement within an 8-10 day period. Five additional 10-day sensitivity experiments with different model configurations are conducted to help understand the interaction of the three gyres. 
These experiments suggest the improved lead time in the control run may be attributed to the realistic simulation of the ccMRG wave with the following processes: (1) wave deepening associated with wave shortening and/or the intensification of individual gyres, (2) poleward movement of gyres that may be associated with boundary layer processes, (3) realistic simulation of moist processes at regional scales in association with each of the gyres, and (4) the vertical phasing of low- and mid-level cyclonic circulations associated with a specific gyre.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21289692-positron-spectroscopy-investigation-normal-brain-section-brain-section-glioma-derived-from-rat-glioma-model','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21289692-positron-spectroscopy-investigation-normal-brain-section-brain-section-glioma-derived-from-rat-glioma-model"><span>Positron Spectroscopy Investigation of Normal Brain Section and Brain Section with Glioma Derived from a Rat Glioma Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Yang, SH.; Ballmann, C.; Quarles, C. A.</p> <p>2009-03-10</p> <p>The application of positron annihilation lifetime spectroscopy (PALS) and Doppler broadening spectroscopy (DBS) to the study of animal or human tissue has only recently been reported [G. Liu, et al. phys. stat. sol. (C) 4, Nos. 10, 3912-3915 (2007)]. We have initiated a study of normal brain section and brain section with glioma derived from a rat glioma model. For the rat glioma model, 200,000 C6 cells were implanted in the basal ganglion of adult Sprague Dawley rats. The rats were sacrificed at 21 days after implantation. The brains were harvested, sliced into 2 mm thick coronal sections, and fixed in 4% formalin.
PALS lifetime runs were made with the samples soaked in formalin, and there was no significant evaporation of formalin during the runs. The lifetime spectra were analyzed into two lifetime components. While early results suggested a small decrease in ortho-positronium (o-Ps) pickoff lifetime between the normal brain section and the brain section with glioma, further runs with additional samples showed no statistically significant difference between the normal and tumor tissue for this type of tumor. The o-Ps lifetime in formalin alone was lower than in either the normal tissue or the glioma sample, so annihilation in the formalin absorbed by the samples would lower the o-Ps lifetime and may have masked any difference due to the glioma itself. DBS was also used to investigate the difference in positronium formation between tumor and normal tissue. Tissue samples are heterogeneous, and this needs to be carefully considered if PALS and DBS are to become useful tools for distinguishing tissue samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140010321','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140010321"><span>Genesis of Twin Tropical Cyclones as Revealed by a Global Mesoscale Model: The Role of Mixed Rossby Gravity Waves</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shen, Bo-Wen; Tao, Wei-Kuo; Lin, Yuh-Lang; Laing, Arlene</p> <p>2012-01-01</p> <p>In this study, it is proposed that twin tropical cyclones (TCs), Kesiny and 01A, in May 2002 formed in association with the scale interactions of three gyres that appeared as a convectively coupled mixed Rossby gravity (ccMRG) wave during an active phase of the Madden-Julian Oscillation (MJO).
This is shown by analyzing observational data, including NCEP reanalysis data and METEOSAT 7 IR satellite imagery, and performing numerical simulations using a global mesoscale model. A 10-day control run is initialized at 0000 UTC 1 May 2002 with grid-scale condensation but no sub-grid cumulus parameterizations. The ccMRG wave was identified as encompassing two developing and one non-developing gyres, the first two of which intensified and evolved into the twin TCs. The control run is able to reproduce the evolution of the ccMRG wave and thus the formation of the twin TCs about two and five days in advance as well as their subsequent intensity evolution and movement within an 8-10 day period. Five additional 10-day sensitivity experiments with different model configurations are conducted to help understand the interaction of the three gyres, leading to the formation of the TCs. These experiments suggest the improved lead time in the control run may be attributed to the realistic simulation of the ccMRG wave with the following processes: (1) wave deepening (intensification) associated with a reduction in wavelength and/or the intensification of individual gyres, (2) poleward movement of gyres that may be associated with boundary layer processes, (3) realistic simulation of moist processes at regional scales in association with each of the gyres, and (4) the vertical phasing of low- and mid-level cyclonic circulations associated with a specific gyre.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 
--> </div><!-- row --> </div><!-- page_25 --> <div class="footer-extlink text-muted" style="margin-bottom:1rem; text-align:center;">Some links on this page may take you to non-federal websites. Their policies may differ from this site.</div> </div><!-- container --> <a id="backToTop" href="#top"> Top </a> <footer> <nav> <ul class="links"> <li><a href="/sitemap.html">Site Map</a></li> <li><a href="/website-policies.html">Website Policies</a></li> <li><a href="https://www.energy.gov/vulnerability-disclosure-policy" target="_blank">Vulnerability Disclosure Program</a></li> <li><a href="/contact.html">Contact Us</a></li> </ul> </nav> </footer> <script type="text/javascript"><!-- // var lastDiv = ""; function showDiv(divName) { // hide last div if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; } //if value of the box is not nothing and an object with that name exists, then change the class if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; } } //--> </script> <script> /** * Function that tracks a click on an outbound link in Google Analytics. * This function takes a valid URL string as an argument, and uses that URL string * as the event label. */ var trackOutboundLink = function(url,collectionCode) { try { h = window.open(url); setTimeout(function() { ga('send', 'event', 'topic-page-click-through', collectionCode, url); }, 1000); } catch(err){} }; </script> <!-- Google Analytics --> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-1122789-34', 'auto'); ga('send', 'pageview'); </script> <!-- End Google Analytics --> <script> showDiv('page_1') </script> </body> </html>