Sample records for model runs based

  1. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  2. Defensive Swarm: An Agent Based Modeling Analysis

    DTIC Science & Technology

    2017-12-01

    INITIAL ALGORITHM (SINGLE-RUN) TESTING ... Patrol Algorithm—Passive ... scalability are therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see ... changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations testing the model under different circumstances

  3. Design and Development of a Model to Simulate 0-G Treadmill Running Using the European Space Agency's Subject Loading System

    NASA Technical Reports Server (NTRS)

    Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.

    2010-01-01

    Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS VIEW software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDFs. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
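
    For readers who want to experiment with the dynamics described above, a minimal Python sketch of the planar spring-mass (SLIP) stance phase follows; it shows how a single leg spring yields both vertical (GRFv) and posterior-anterior shear (GRFsh) ground reaction forces. All parameter values are illustrative assumptions, not values from the paper, whose model was built in ADAMS VIEW.

    ```python
    import math

    m, g = 70.0, 9.81           # runner mass [kg], gravity [m/s^2] (assumed)
    k, L0 = 20000.0, 1.0        # leg stiffness [N/m], rest leg length [m] (assumed)
    theta0 = math.radians(68)   # leg angle from horizontal at touchdown (assumed)

    # State: point mass at (x, y) with velocity (vx, vy); foot pinned at origin.
    x, y = -L0 * math.cos(theta0), L0 * math.sin(theta0)
    vx, vy = 4.5, -0.8          # forward and downward touchdown velocity [m/s]
    dt, t, grfv_peak = 1e-4, 0.0, 0.0

    while True:
        L = math.hypot(x, y)
        if L >= L0 and t > 0:                # leg back at rest length -> takeoff
            break
        F = k * (L0 - L)                     # spring force magnitude along the leg
        grfv, grfsh = F * y / L, F * x / L   # vertical and shear GRF components
        grfv_peak = max(grfv_peak, grfv)
        ax, ay = grfsh / m, grfv / m - g     # Newton's second law on the mass
        vx, vy = vx + ax * dt, vy + ay * dt  # explicit Euler integration
        x, y = x + vx * dt, y + vy * dt
        t += dt

    print(f"stance time {t * 1000:.0f} ms, peak GRFv {grfv_peak:.0f} N")
    ```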

  4. Implications of random variation in the Stand Prognosis Model

    Treesearch

    David A. Hamilton

    1991-01-01

    Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...

  5. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
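
    The abstract describes a master/worker fan-out: a multi-threaded dispatcher hands one-dimensional model runs to SOAP web services on remote hosts. The original system was written in Perl with SOAP-driven services; the sketch below re-expresses only the fan-out pattern in Python, with run_model_remote as a hypothetical stand-in for the SOAP call and the host URLs as assumptions.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from itertools import cycle

    # Hypothetical worker endpoints; the real system used SOAP services on a
    # mix of Linux and MS-Windows hosts.
    HOSTS = ["http://host1:8080", "http://host2:8080", "http://host3:8080"]

    def run_model_remote(host, cell_id):
        """Stand-in for the SOAP web-service call running the 1-D model once.

        A real client would send a SOAP request to `host` and parse the reply;
        here we just return a dummy result so the sketch is runnable.
        """
        return cell_id, 0.0

    def run_regional(cell_ids):
        """Fan one model run per grid cell out to the worker hosts."""
        host_pool = cycle(HOSTS)                 # round-robin host assignment
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            futures = [pool.submit(run_model_remote, next(host_pool), c)
                       for c in cell_ids]
            return dict(f.result() for f in futures)

    if __name__ == "__main__":
        results = run_regional(range(100))       # e.g. 100 grid cells
        print(f"collected {len(results)} cell results")
    ```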

  6. A Method for Generating Reduced Order Linear Models of Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1997-01-01

    For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.

  7. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources toward running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational pieces. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.

  8. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run to populate the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives would otherwise have impeded such access. Construction of a proxy model, its subsequent use in calibrating a complex model, and its use in analysing the uncertainties of predictions made by that model are implemented in the PEST suite.
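
    The division of labour described above (the cheap proxy fills the Jacobian, the expensive original model evaluates candidate parameter upgrades) can be illustrated with a toy Gauss-Newton loop. This is a sketch of the general idea only, not PEST's implementation; both proxy and original_model below are invented stand-ins.

    ```python
    import numpy as np

    def original_model(p):      # expensive simulator (invented stand-in)
        return np.array([p[0] ** 2 + p[1], np.sin(p[0]) + 2.0 * p[1]])

    def proxy(p):               # cheap analytic surrogate of the same outputs
        return np.array([p[0] ** 2 + p[1], p[0] + 2.0 * p[1]])

    obs = np.array([2.0, 3.0])  # observations to be matched
    p = np.array([0.5, 0.5])    # initial parameter estimates

    for iteration in range(20):
        # Populate the Jacobian from the PROXY only: cheap, and immune to the
        # numerical noise that plagues finite differences of the original model.
        h, J = 1e-6, np.empty((2, 2))
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = h
            J[:, j] = (proxy(p + dp) - proxy(p - dp)) / (2 * h)
        # Residuals (and any upgrade testing) use the ORIGINAL model.
        r = obs - original_model(p)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton upgrade
        if np.linalg.norm(step) < 1e-10:
            break
        p = p + step

    print("calibrated parameters:", p)
    ```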

  9. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-10-01

    are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in reference ... Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap ... Impact velocities in the range 700 m/s to 1000 m/s are modeled using SPH elements.

  10. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.

  11. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
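
    For reference, a commonly used definition of the composite scaled sensitivity mentioned above (following the Hill and Tiedeman regression framework on which such analyses are typically based; stated here as one common convention, not verified against this paper's exact notation) is:

    ```latex
    % Composite scaled sensitivity of parameter b_j, with y'_i the simulated
    % equivalents of the ND observations and omega_i the observation weights:
    \mathrm{css}_{j}
      = \left[ \frac{1}{ND} \sum_{i=1}^{ND}
          \left( \frac{\partial y_{i}'}{\partial b_{j}}\; b_{j}\;
                 \omega_{i}^{1/2} \right)^{2} \right]^{1/2}
    ```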

  12. A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)

    Treesearch

    Glen E. Liston; Kelly Elder

    2006-01-01

    An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...

  13. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
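
    The core postprocessing step (grouping replicate runs at a single K into modes via a similarity matrix and Markov clustering) can be sketched compactly. The toy below assumes the replicate Q-matrices are already label-aligned; real Clumpak delegates alignment and similarity computation to Clumpp and uses the MCL algorithm, and the kernel bandwidth and test data here are invented.

    ```python
    import numpy as np

    def similarity(q1, q2):
        """Kernel similarity between two label-aligned N x K membership matrices
        (the 0.05 bandwidth is an invented illustration, not Clumpp's measure)."""
        return float(np.exp(-np.abs(q1 - q2).mean() / 0.05))

    def cluster_modes(runs, inflate=2.0, iters=30):
        """Group replicate runs into modes with an MCL-style iteration."""
        n = len(runs)
        S = np.array([[similarity(a, b) for b in runs] for a in runs])
        M = S / S.sum(axis=0, keepdims=True)      # column-stochastic matrix
        for _ in range(iters):
            M = (M @ M) ** inflate                # expansion then inflation
            M = M / M.sum(axis=0, keepdims=True)
        modes = {}
        for j in range(n):                        # runs sharing an attractor row
            modes.setdefault(int(np.argmax(M[:, j])), []).append(j)
        return list(modes.values())

    # Two distinct "solutions", five noisy replicates each: expect two modes.
    rng = np.random.default_rng(0)
    sol_a, sol_b = rng.dirichlet(np.ones(3), 100), rng.dirichlet(np.ones(3), 100)
    runs = [sol_a + rng.normal(0, 0.01, sol_a.shape) for _ in range(5)] + \
           [sol_b + rng.normal(0, 0.01, sol_b.shape) for _ in range(5)]
    print(cluster_modes(runs))    # e.g. [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
    ```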

  14. Assessing experience in the deliberate practice of running using a fuzzy decision-support system

    PubMed Central

    Roveri, Maria Isabel; Manoel, Edison de Jesus; Onodera, Andrea Naomi; Ortega, Neli R. S.; Tessutti, Vitor Daniel; Vilela, Emerson; Evêncio, Nelson

    2017-01-01

    The judgement of skill experience and its levels is ambiguous, though it is crucial for decision-making in sports science studies. We developed a fuzzy decision support system to classify the experience of non-elite distance runners. Two Mamdani subsystems were developed based on expert running coaches’ knowledge. In the first subsystem, the linguistic variables of training frequency and volume were combined and the output defined the quality of running practice. The second subsystem yielded the level of running experience from the combination of the first subsystem output with the number of competitions and practice time. The model results were highly consistent with the judgment of three expert running coaches (r>0.88, p<0.001) and also with five other expert running coaches (r>0.86, p<0.001). From the experts’ knowledge and the fuzzy model, running experience goes beyond the so-called "10-year rule" and depends not only on practice time, but on the quality of practice (training volume and frequency) and participation in competitions. The fuzzy rule-based model was very reliable, valid, deals with the marked ambiguities inherent in the judgment of experience and has potential applications in research, sports training, and clinical settings. PMID:28817655
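
    A Mamdani subsystem of the kind described (training frequency and volume combining into a quality-of-practice score) can be sketched in a few lines. The membership functions and the two rules below are invented for illustration; the paper's rule base was elicited from expert running coaches.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership on a <= b <= c; a == b or b == c gives a shoulder."""
        x = np.asarray(x, dtype=float)
        left = np.where(b > a, (x - a) / (b - a + 1e-12), 1.0)
        right = np.where(c > b, (c - x) / (c - b + 1e-12), 1.0)
        return np.clip(np.minimum(left, right), 0.0, 1.0)

    def practice_quality(freq_per_week, km_per_week):
        """Mamdani-style score for 'quality of running practice' on a 0-10 scale."""
        # Fuzzify the two inputs (invented linguistic terms).
        f_low, f_high = tri(freq_per_week, 0, 0, 4), tri(freq_per_week, 2, 7, 7)
        v_low, v_high = tri(km_per_week, 0, 0, 40), tri(km_per_week, 20, 80, 80)
        y = np.linspace(0, 10, 201)                  # output universe
        q_poor, q_good = tri(y, 0, 0, 5), tri(y, 5, 10, 10)
        # Rule firing strengths (max = OR, min = AND):
        r1 = max(f_low, v_low)      # R1: low frequency OR low volume -> poor
        r2 = min(f_high, v_high)    # R2: high frequency AND high volume -> good
        # Clip consequents, aggregate with max, defuzzify by centroid.
        agg = np.maximum(np.minimum(r1, q_poor), np.minimum(r2, q_good))
        return float((y * agg).sum() / (agg.sum() + 1e-12))

    print(round(practice_quality(2, 15), 2))   # infrequent, low volume -> low score
    print(round(practice_quality(6, 70), 2))   # frequent, high volume -> high score
    ```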

  15. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. This brings new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately high volume manufacturing tests monitoring OPO with densely measured OVL data.
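
    The bias-variance trade-off described above can be reproduced with a toy experiment: fit polynomial "overlay models" of increasing order to noisy synthetic wafer signatures, then compare the single-wafer residual (bias) against the wafer-to-wafer variance of the fitted model terms. Everything below is illustrative; real overlay models are two-dimensional field-by-field fits.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-1, 1, 40)                    # measurement sites across wafer
    true_signature = 0.8 * x - 0.5 * x ** 3       # underlying process signature

    for order in (1, 3, 5, 9):
        coeffs, resid = [], []
        for wafer in range(50):                   # wafer-to-wafer noise realizations
            y = true_signature + rng.normal(0, 0.1, x.size)
            c = np.polyfit(x, y, order)
            coeffs.append(c)
            resid.append(np.mean((np.polyval(c, x) - y) ** 2))
        var = np.mean(np.var(coeffs, axis=0))     # model-term variance across wafers
        print(f"order {order}: single-wafer residual {np.mean(resid):.4f}, "
              f"term variance {var:.4f}")
    ```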

  16. Effects of Training Leaders in Needs-Based Methods of Running Meetings

    ERIC Educational Resources Information Center

    Douglass, Emily M.; Malouff, John M.; Rangan, Julie A.

    2015-01-01

    This study evaluated the effects of brief training in how to lead organizational meetings. The training was based on an attendee-needs-based model of running meetings. Twelve mid-level managers completed the training. The study showed a significant pre to post increase in the number of needs-based behaviors displayed by meeting leaders and in…

  17. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
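
    The random walk at the heart of CHRR can be sketched compactly. The toy below performs plain coordinate hit-and-run on a polytope {x : Ax <= b}; the rounding preprocessing step and the equality (steady-state) constraints of a real flux space are omitted, so this illustrates the walk itself, not the full published algorithm.

    ```python
    import numpy as np

    def coordinate_hit_and_run(A, b, x0, n_samples, rng=None):
        """Plain coordinate hit-and-run on the bounded polytope {x : A x <= b}."""
        rng = rng or np.random.default_rng()
        x = x0.astype(float).copy()
        samples = np.empty((n_samples, x.size))
        for s in range(n_samples):
            i = rng.integers(x.size)          # random coordinate direction e_i
            a_i = A[:, i]
            slack = b - A @ x                 # distance to each facet (>= 0 inside)
            # The chord x + t*e_i stays feasible while t * a_i <= slack.
            with np.errstate(divide="ignore", invalid="ignore"):
                ratios = slack / a_i
            t_max = ratios[a_i > 0].min()     # nearest facet in the +e_i direction
            t_min = ratios[a_i < 0].max()     # nearest facet in the -e_i direction
            x[i] += rng.uniform(t_min, t_max) # uniform point on the chord
            samples[s] = x
        return samples

    # Example: the unit box [0, 1]^3 written as A x <= b.
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.hstack([np.ones(3), np.zeros(3)])
    pts = coordinate_hit_and_run(A, b, x0=np.full(3, 0.5), n_samples=10000)
    print(pts.mean(axis=0))   # should approach the centroid [0.5, 0.5, 0.5]
    ```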

  18. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  19. Integrating Geo-Spatial Data for Regional Landslide Susceptibility Modeling in Consideration of Run-Out Signature

    NASA Astrophysics Data System (ADS)

    Lai, J.-S.; Tsai, F.; Chiang, S.-H.

    2016-06-01

    This study implements a data-mining algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study, including aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. Consequently, this study transforms the landslide inventory and vector-based causative factors into a pixel-based format in order to overlay them with other raster data for constructing the random-forests-based model. This study also uses original and edited topographic data in the analysis to understand their impacts on the susceptibility modelling. Experimental results demonstrate that, after identifying the run-out signatures, the overall accuracy and Kappa coefficient reach more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
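
    A minimal version of the classification step described above might look as follows, with synthetic pixels standing in for the rasterized causative factors and inventory labels. The feature columns, the synthetic labelling rule, and all parameters are assumptions for illustration, not values from the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 5000                        # synthetic "pixels"
    X = np.column_stack([
        rng.uniform(0, 360, n),     # aspect [deg]
        rng.normal(0, 1, n),        # curvature
        rng.uniform(0, 3000, n),    # elevation [m]
        rng.uniform(0, 60, n),      # slope [deg]
        rng.uniform(-1, 1, n),      # NDVI
    ])
    # Invented rule: steep, sparsely vegetated pixels are landslide-prone.
    y = ((X[:, 3] > 35) & (X[:, 4] < 0.2)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("accuracy:", accuracy_score(y_te, pred))
    print("kappa   :", cohen_kappa_score(y_te, pred))
    ```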

  20. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-09-09

    targets with the .30cal AP M2 projectile using SPH elements. Model validation runs were conducted based on the DoP experiments described in reference ... effect of material properties on DoP ... Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments ... and ceramic-faced aluminum targets with the .30cal AP M2 projectile using SPH elements.

  1. Kameleon Live: An Interactive Cloud Based Analysis and Visualization Platform for Space Weather Researchers

    NASA Astrophysics Data System (ADS)

    Pembroke, A. D.; Colbert, J. A.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed off-line via Kameleon, CCMC's cross-language access and interpolation library. Off-line analysis may be suitable for power-users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, a lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve on the new platform.

  2. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
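
    The emulation idea (a climate variable expressed as a simple function of the past CO2 trajectory, fit on a handful of precomputed runs) can be illustrated with a toy regression. The exponential-memory feature and the synthetic "GCM" below are invented stand-ins, not the paper's statistical model.

    ```python
    import numpy as np

    def co2_memory(co2, lam=0.02):
        """Exponentially weighted past CO2 (crude stand-in for ocean lag)."""
        out, acc = np.empty_like(co2), co2[0]
        for t, c in enumerate(co2):
            acc += lam * (c - acc)
            out[t] = acc
        return out

    def fake_gcm(co2, rng):
        """Stand-in for a precomputed GCM run: lagged log-CO2 response + noise."""
        return 3.0 * np.log(co2_memory(co2) / co2[0]) + rng.normal(0, 0.05, co2.size)

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2101)

    # "Train" on two forcing scenarios, then emulate a third, unseen one.
    scenarios = [280 * np.exp(0.005 * (years - 2000)),
                 280 * (1 + 0.003 * (years - 2000))]
    X = np.hstack([np.log(co2_memory(s) / s[0]) for s in scenarios]).reshape(-1, 1)
    T = np.hstack([fake_gcm(s, rng) for s in scenarios])
    beta = np.linalg.lstsq(X, T, rcond=None)[0]    # fit the statistical model

    new = 280 * np.exp(0.007 * (years - 2000))     # arbitrary new scenario
    T_emulated = beta[0] * np.log(co2_memory(new) / new[0])
    print("emulated end-of-century anomaly: %.2f" % T_emulated[-1])
    ```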

  3. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  4. Teaching Bank Runs with Classroom Experiments

    ERIC Educational Resources Information Center

    Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy

    2011-01-01

    Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…

  5. Run Environment and Data Management for Earth System Models

    NASA Astrophysics Data System (ADS)

    Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.

    2009-04-01

    The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the tasks to be executed independently on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and the automatic generation of metadata in XML forms during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.

  6. NONMEMory: a run management tool for NONMEM.

    PubMed

    Wilkins, Justin J

    2005-06-01

    NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.

  7. Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre

    2009-01-01

    The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.

  8. Evaluation of quality of precipitation products: A case study using WRF and IMERG data over the central United States

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lin, L. F.; Bras, R. L.

    2017-12-01

    Hydrological applications rely on the availability and quality of precipitation products, especially model- and satellite-based products for use in areas without ground measurements. The quality of model- and satellite-based precipitation products is known to be complementary: model-based products exhibit high quality during winters, while satellite-based products tend to be better during summers. To explore that behavior, this study uses 2-m air temperature as auxiliary information to evaluate high-resolution (0.1°×0.1°, hourly) precipitation products from Weather Research and Forecasting (WRF) simulations and from version-4 Integrated Multi-satellite Retrievals for GPM (IMERG) early and final runs. The products are evaluated relative to the reference NCEP Stage IV precipitation estimates over the central United States in 2016. The results show that the WRF and IMERG final-run estimates are nearly unbiased, while the IMERG early-run estimates are positively biased. The results also show that the WRF estimates exhibit high correlations with the reference data when the temperature falls below 280 K, and that the IMERG estimates (both early and final runs) do so when the temperature exceeds 280 K. Moreover, the temperature threshold of 280 K, which distinguishes the quality of the WRF and IMERG products, does not vary significantly with either season or location. This study not only adds insight into current research on the quality of precipitation products but also suggests a simple way of choosing either a model- or satellite-based product, or a hybrid model/satellite product, for applications.
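
    The practical upshot of the 280 K threshold is a one-line selection rule. A minimal sketch, assuming the three fields are co-located numpy arrays on the same 0.1-degree hourly grid (array names are illustrative):

    ```python
    import numpy as np

    def hybrid_precip(p_wrf, p_imerg, t2m, threshold_k=280.0):
        """Choose WRF precipitation below the temperature threshold, IMERG above."""
        return np.where(t2m < threshold_k, p_wrf, p_imerg)

    # Toy demo on a 2 x 2 grid: cold cells take the WRF value, warm cells IMERG.
    t2m = np.array([[270.0, 275.0], [285.0, 290.0]])
    p_wrf = np.full_like(t2m, 1.0)
    p_imerg = np.full_like(t2m, 2.0)
    print(hybrid_precip(p_wrf, p_imerg, t2m))
    ```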

  9. Characterizing the Mechanical Properties of Running-Specific Prostheses

    PubMed Central

    Beck, Owen N.; Taboga, Paolo; Grabowski, Alena M.

    2016-01-01

    The mechanical stiffness of running-specific prostheses likely affects the functional abilities of athletes with leg amputations. However, each prosthetic manufacturer recommends prostheses based on subjective stiffness categories rather than performance-based metrics. The actual mechanical stiffness values of running-specific prostheses (i.e. kN/m) are unknown. Consequently, we sought to characterize and disseminate the stiffness values of running-specific prostheses so that researchers, clinicians, and athletes can objectively evaluate prosthetic function. We characterized the stiffness values of 55 running-specific prostheses across various models, stiffness categories, and heights, using forces and angles representative of those measured from athletes with transtibial amputations during running. Characterizing prosthetic force-displacement profiles with a 2nd-degree polynomial explained 4.4% more of the variance than a linear function (p<0.001). The prosthetic stiffness values of manufacturer-recommended stiffness categories varied between prosthetic models (p<0.001). Also, prosthetic stiffness was 10% to 39% less at angles typical of running at 3 m/s and 6 m/s (10°-25°) compared to neutral (0°) (p<0.001). Furthermore, prosthetic stiffness was inversely related to height in J-shaped (p<0.001), but not C-shaped, prostheses. Running-specific prostheses should be tested under the demands of the respective activity in order to derive relevant characterizations of stiffness and function. In all, our results indicate that when athletes with leg amputations alter prosthetic model, height, and/or sagittal plane alignment, their prosthetic stiffness profiles also change; therefore variations in comfort, performance, etc. may be indirectly due to altered stiffness. PMID:27973573
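
    The characterization step (fitting force-displacement profiles and comparing a linear with a 2nd-degree polynomial fit) is easy to sketch. The synthetic stiffening-spring data below are invented; the paper fit measured profiles from 55 prostheses.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    disp = np.linspace(0, 0.06, 50)                      # displacement [m]
    force = 15e3 * disp + 80e3 * disp ** 2 + rng.normal(0, 20, disp.size)  # [N]

    def r_squared(y, y_hat):
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    for degree in (1, 2):
        fit = np.polyval(np.polyfit(disp, force, degree), disp)
        print(f"degree {degree}: R^2 = {r_squared(force, fit):.4f}")

    # Tangent stiffness dF/dx of the quadratic fit, evaluated mid-stroke:
    c2, c1, _ = np.polyfit(disp, force, 2)
    print(f"stiffness at 30 mm: {(2 * c2 * 0.03 + c1) / 1e3:.1f} kN/m")
    ```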

  10. Risk factors for lower extremity injuries among half marathon and marathon runners of the Lage Landen Marathon Eindhoven 2012: A prospective cohort study in the Netherlands.

    PubMed

    van Poppel, D; de Koning, J; Verhagen, A P; Scholten-Peeters, G G M

    2016-02-01

    To determine risk factors for running injuries during the Lage Landen Marathon Eindhoven 2012. Prospective cohort study. Population-based study. This study included 943 runners. Running injuries after the Lage Landen Marathon. Sociodemographic and training-related factors as well as lifestyle factors were considered as potential risk factors and assessed in a questionnaire 1 month before the running event. The association between potential risk factors and injuries was determined, for each running distance separately, using univariate and multivariate logistic regression analysis. In total, 154 respondents sustained a running injury. Among the half marathon runners, in the univariate model, body mass index ≥ 26 kg/m(2), ≤ 5 years of running experience, and often performing interval training were significantly associated with running injuries, whereas in the multivariate model only ≤ 5 years of running experience and not performing interval training on a regular basis were significantly associated with running injuries. Among marathon runners, no multivariate model could be created because of the low number of injuries and participants. This study indicates that interval training on a regular basis may be recommended to marathon runners to reduce the risk of injury. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Bed composition generation for morphodynamic modeling: Case study of San Pablo Bay in California, USA

    USGS Publications Warehouse

    van der Wegen, M.; Dastgheib, A.; Jaffe, B.E.; Roelvink, D.

    2011-01-01

    Applications of process-based morphodynamic models are often constrained by limited availability of data on bed composition, which may have a considerable impact on the modeled morphodynamic development. One may even distinguish a period of "morphodynamic spin-up" in which the model generates the bed level according to some ill-defined initial bed composition rather than describing the realistic behavior of the system. The present paper proposes a methodology to generate bed composition of multiple sand and/or mud fractions that can act as the initial condition for the process-based numerical model Delft3D. The bed composition generation (BCG) run does not include bed level changes, but does permit the redistribution of multiple sediment fractions over the modeled domain. The model applies the concept of an active layer that may differ in sediment composition above an underlayer with fixed composition. In the case of a BCG run, the bed level is kept constant, whereas the bed composition can change. The approach is applied to San Pablo Bay in California, USA. Model results show that the BCG run reallocates sand and mud fractions over the model domain. Initially, a major sediment reallocation takes place, but development rates decrease in the longer term. Runs that take the outcome of a BCG run as a starting point lead to more gradual morphodynamic development. Sensitivity analysis shows the impact of variations in the morphological factor, the active layer thickness, and wind waves. An important but difficult to characterize criterion for a successful application of a BCG run is that it should not lead to a bed composition that fixes the bed so that it dominates the "natural" morphodynamic development of the system. Future research will focus on a decadal morphodynamic hindcast and comparison with measured bathymetries in San Pablo Bay so that the proposed methodology can be tested and optimized. © 2010 The Author(s).

  12. Does a crouched leg posture enhance running stability and robustness?

    PubMed

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Towards a complex systems approach in sports injury research: simulating running-related injury development with agent-based modelling.

    PubMed

    Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M

    2018-06-18

    There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method to develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point for applying more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
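
    The workload metric driving the simulation, the acute:chronic workload ratio (ACWR), is simple to compute. The sketch below uses one common variant (acute = latest week, chronic = mean of the preceding four weeks) and the frequently cited 0.8-1.3 'sweet spot'; both are conventions used for illustration rather than details taken from the paper.

    ```python
    def acwr(weekly_km):
        """ACWR for the most recent week, given at least five weekly distances."""
        acute = weekly_km[-1]                    # latest week's running distance
        chronic = sum(weekly_km[-5:-1]) / 4.0    # mean of the preceding 4 weeks
        return acute / chronic

    history = [30, 32, 34, 36, 42]   # km per week; ~17% jump in the last week
    ratio = acwr(history)
    print(f"ACWR = {ratio:.2f}",
          "(inside the 0.8-1.3 band)" if 0.8 <= ratio <= 1.3
          else "(outside the 0.8-1.3 band)")
    ```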

  14. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

    A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model fitting runs using half of the data for model training and the other half for evaluation. The correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal development rate-temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in the case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, is 5.6 °C for B. pendula leaf unfolding and 6.7 °C for the start of flowering; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
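
    A bare-bones degree-day model of the kind evaluated above accumulates daily warmth over a base temperature until a required heat sum is reached. The 5.6 °C base temperature for B. pendula leaf unfolding is taken from the abstract; the heat-sum requirement and the temperature series below are invented for illustration (and the sketch omits the diurnal-cycle refinement that distinguishes DDcos).

    ```python
    def degree_day_onset(daily_mean_temp, t_base=5.6, heat_sum_required=70.0):
        """Return the day index at which the heat sum is first reached."""
        heat_sum = 0.0
        for day, temp in enumerate(daily_mean_temp):
            heat_sum += max(0.0, temp - t_base)   # only warmth above base counts
            if heat_sum >= heat_sum_required:
                return day
        return None                               # requirement never met

    # Illustrative spring warming series (daily means, degC):
    temps = [2 + 0.15 * d for d in range(120)]
    print("predicted onset on day", degree_day_onset(temps))
    ```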

  15. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  16. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.

  17. The global warming in the North Atlantic Sector and the role of the ocean

    NASA Astrophysics Data System (ADS)

    Hand, R.; Keenlyside, N. S.; Greatbatch, R. J.; Omrani, N. E.

    2014-12-01

    This work presents an analysis of North Atlantic ocean-atmosphere interaction in a warming climate, based on a long-term earth system model experiment forced by the RCP 8.5 scenario, the strongest greenhouse gas forcing used in the climate projections for the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. In addition to a global increase in sea surface temperatures (SSTs) as a direct response to the radiative forcing, the model shows a distinct change in the local SST patterns in the Gulf Stream region: the SST front moves northward by several hundred kilometers, likely as a response of the wind-driven part of the oceanic surface circulation, and becomes more zonal. As a consequence of a massive slowdown of the Atlantic Meridional Overturning Circulation (AMOC), the northeast North Atlantic shows only a moderate warming compared to the rest of the ocean. The feedback of these changes on the atmosphere was studied in a set of sensitivity experiments based on the SST climatology of the coupled runs. The set consists of a control run based on the historical run, a run using the full SST from the coupled RCP 8.5 run, and two runs in which the SST signal was deconstructed into a homogeneous mean warming part and a local pattern change. In the region of the precipitation maximum in the historical run, the future scenario shows an increase of absolute SSTs but a significant decrease in local precipitation, low-level convergence and upward motion. Since warmer SSTs usually cause the opposite, this indicates that the local response in that region is connected to the weakened SST gradients (with respect to the historical run) rather than to the absolute SST. Consistently, the model shows enhanced precipitation north of this region, where the SST gradients are enhanced. However, the signal is confined to the low and mid-troposphere and does not reach the higher model levels. There is little evidence for a large-scale response to the changes in the Gulf Stream region; instead, the large-scale signal is mainly controlled by the warmer background state and the AMOC slowdown, and influenced by tropical SSTs. In a warmer climate, the same change in SST gradient has a stronger effect on precipitation, and the model produces a slightly enhanced North Atlantic storm track.

  18. Econometric Model of Rice Policy Based On Presidential Instruction

    NASA Astrophysics Data System (ADS)

    Abadi Sembiring, Surya; Hutauruk, Julia

    2018-01-01

    The objective of this research is to build an econometric model of rice policy based on the Presidential Instruction. The data were a monthly time series from March 2005 to September 2009. The rice policy model is specified as a simultaneous-equation system, consisting of 14 structural equations and four identity equations, estimated using the Two-Stage Least Squares (2SLS) method. The results show that: (1) an increase in the government purchasing price of dried harvest paddy increases total rice production and community rice stock; (2) an increase in community rice stock leads to a decrease in rice imports; (3) an increase in the realized distribution of subsidized ZA and NPK fertilizers increases total rice production and community rice stock and reduces rice imports; (4) the price of dried harvest paddy is highly responsive to the water content of dried harvest paddy in both the short run and the long run; (5) the quantity of rice imported is highly responsive to the imported rice price, in both the short run and the long run.

  19. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    PubMed Central

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  20. Refining the modelling of vehicle-track interaction

    NASA Astrophysics Data System (ADS)

    Kaiser, Ingo

    2012-01-01

    An enhanced model of a passenger coach running on a straight track is developed. This model includes wheelsets modelled as rotating flexible bodies, a track consisting of flexible rails supported on discrete sleepers and wheel-rail contact modules, which can describe non-elliptic contact patches based on a boundary element method (BEM). For the scenarios of undisturbed centred running and permanent hunting, the impact of the structural deformations of the wheelsets and the rails on the stress distribution in the wheel-rail contact is investigated.

  1. Hydroclimatic projections for the Murray-Darling Basin based on an ensemble derived from Intergovernmental Panel on Climate Change AR4 climate models

    NASA Astrophysics Data System (ADS)

    Sun, Fubao; Roderick, Michael L.; Lim, Wee Ho; Farquhar, Graham D.

    2011-12-01

    We assess hydroclimatic projections for the Murray-Darling Basin (MDB) using an ensemble of 39 Intergovernmental Panel on Climate Change AR4 climate model runs based on the A1B emissions scenario. The raw model output for precipitation, P, was adjusted using a quantile-based bias correction approach. We found that the projected change, ΔP, between two 30 year periods (2070-2099 less 1970-1999) was little affected by bias correction. The range for ΔP among models was large (~±150 mm yr-1) with all-model run and all-model ensemble averages (4.9 and -8.1 mm yr-1) near zero, against a background climatological P of ~500 mm yr-1. We found that the time series of actually observed annual P over the MDB was indistinguishable from that generated by a purely random process. Importantly, nearly all the model runs showed similar behavior. We used these facts to develop a new approach to understanding variability in projections of ΔP. By plotting ΔP versus the variance of the time series, we could easily identify model runs with projections for ΔP that were beyond the bounds expected from purely random variations. For the MDB, we anticipate that a purely random process could lead to differences of ±57 mm yr-1 (95% confidence) between successive 30 year periods. This is equivalent to ±11% of the climatological P and translates into variations in runoff of around ±29%. This sets a baseline for gauging modeled and/or observed changes.
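
    The ±57 mm yr-1 baseline quoted above follows from elementary sampling theory. Assuming annual precipitation totals behave as independent draws with standard deviation σ, the difference between two successive 30-year means satisfies:

    ```latex
    \operatorname{Var}\!\left(\bar{P}_{2} - \bar{P}_{1}\right)
      = \frac{\sigma^{2}}{30} + \frac{\sigma^{2}}{30} = \frac{2\sigma^{2}}{30},
    \qquad
    |\Delta \bar{P}|_{95\%} = 1.96\,\sigma\sqrt{2/30} \approx 0.506\,\sigma .
    % The quoted +/-57 mm/yr bound therefore corresponds to sigma of roughly
    % 113 mm/yr of interannual variability (an implied value, not one stated
    % in the abstract).
    ```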

  2. TESTING CMAQ CHEMISTRY SENSITIVITIES IN BASE CASE AND EMISSION CONTROL RUNS AT SEARCH AND SOS 99 SURFACE SITES IN THE SOUTHEASTERN UNITED STATES

    EPA Science Inventory

    CMAQ was run to simulate urban conditions in the southeastern U.S. in July 1999 at 32, 8, and 2 km grid spacings. Runs were made with two older mechanisms, Carbon Bond IV (CB4) and the Regional Acid Deposition Model, version 2 (RADM2), and with the more recent California Statewid...

  3. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
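
    The core random-walk step of CHRR is easy to state: pick a coordinate axis, intersect it with the feasible set to get a chord, and jump to a uniformly random point on that chord. The sketch below (not the COBRA Toolbox implementation, which additionally performs the rounding step in transformed coordinates) shows that step for a polytope written as {x : Ax <= b}.

```python
# Minimal coordinate hit-and-run sketch on a polytope {x : A x <= b}.
import numpy as np

def chrr_step(x, A, b, rng):
    """One coordinate hit-and-run move: pick an axis, find the feasible
    chord along it, and jump to a uniform point on that chord."""
    i = rng.integers(x.size)
    d = A[:, i]                 # effect of moving along coordinate i
    slack = b - A @ x           # nonnegative while x is feasible
    with np.errstate(divide="ignore", invalid="ignore"):
        t = slack / d
    t_max = t[d > 0].min() if np.any(d > 0) else np.inf
    t_min = t[d < 0].max() if np.any(d < 0) else -np.inf
    x = x.copy()
    x[i] += rng.uniform(t_min, t_max)
    return x

# Usage: uniform samples from the square [-1, 1]^2.
rng = np.random.default_rng(1)
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x = np.zeros(2)
samples = [x := chrr_step(x, A, b, rng) for _ in range(10_000)]
```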

  4. Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment

    NASA Astrophysics Data System (ADS)

    Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat

    2017-08-01

    The Andaman mega-tsunami that struck Malaysia on 26 December 2004 affected 200 kilometers of the northwest Peninsular Malaysia coastline, from Perlis to Selangor. The tsunami scientific community anticipates that the next mega-tsunami could occur at any time. This rare catastrophic event has alerted the Malaysian government to the need for appropriate risk reduction measures, including timely and orderly evacuation. To effectively evacuate ordinary citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house model 2-D TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating the run-up heights along the entire coast of Penang Island, in this paper we computed tsunami run-up amplification factors based upon 2-D TUNA-RP model simulations. The inundation map and run-up amplification factors for six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP model simulated values demonstrates good agreement.

  5. Modelface: an Application Programming Interface (API) for Homology Modeling Studies Using Modeller Software

    PubMed Central

    Sakhteman, Amirhossein; Zare, Bijan

    2016-01-01

    An interactive application, Modelface, is presented for the Modeller software on the Windows platform. The application is able to run all steps of homology modeling, including PDB-to-FASTA generation, running Clustal, model building, and loop refinement. Other modules of Modeller, including energy calculation, energy minimization, and the ability to make single-point mutations in PDB structures, are also implemented inside Modelface. The API is a simple batch-based application with a minimal memory footprint and is free of charge for academic use. The application is also able to repair missing atom types in PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276

  6. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.

  7. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC) method. SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool also for statistical purposes.
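
    A toy sketch of the idea, under assumed discretisation details (the published code at http://skmf.eu is authoritative): a deterministic mean-field exchange update for site occupancies plus Langevin noise of amplitude A, so that decreasing A mimics averaging roughly 1/A^2 KMC runs.

```python
# Toy sketch of the SKMF idea: a deterministic mean-field exchange update for
# site occupancies plus dynamic Langevin noise of amplitude A.  This is a 1-D
# nearest-neighbour cartoon with an assumed rate constant, not the skmf.eu code.
import numpy as np

def skmf_step(c, gamma, A, dt, rng):
    """One explicit step: neighbour exchange flux plus Langevin noise."""
    flux = gamma * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1))
    noise = A * rng.normal(size=c.size) / np.sqrt(dt)
    return np.clip(c + dt * (flux + noise), 0.0, 1.0)  # occupancies in [0, 1]

rng = np.random.default_rng(2)
c = np.where(np.arange(100) < 50, 1.0, 0.0)  # sharp A/B interface
for _ in range(1000):
    c = skmf_step(c, gamma=1.0, A=0.05, dt=0.1, rng=rng)
# Per the abstract, one run at amplitude A plays the role of averaging
# on the order of 1/A**2 KMC runs.
```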

  8. Computer Science Career Network

    DTIC Science & Technology

    2013-03-01

    development model. TopCoder’s development model is competition-based, meaning that TopCoder conducts competitions to develop digital assets. TopCoder...success in running a competition that had as an objective creating digital assets, and we intend to run more of them, to create assets for...cash prizes and merchandise. This includes social media contests, contests with all our games, special referral contests, and a couple NASA

  9. A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data

    PubMed Central

    Li, Pengfei; Li, Yan; Guo, Xiucheng

    2014-01-01

    The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs) to approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of an innovative red-light running prevention system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems. PMID:25435870

  10. A red-light running prevention system based on artificial neural network and vehicle trajectory data.

    PubMed

    Li, Pengfei; Li, Yan; Guo, Xiucheng

    2014-01-01

    The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs) to approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of an innovative red-light running prevention system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems.

  11. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    NASA Astrophysics Data System (ADS)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  12. Using the Activity-based Anorexia Rodent Model to Study the Neurobiological Basis of Anorexia Nervosa.

    PubMed

    Chowdhury, Tara Gunkali; Chen, Yi-Wen; Aoki, Chiye

    2015-10-22

    Anorexia nervosa (AN) is a psychiatric illness characterized by excessively restricted caloric intake and abnormally high levels of physical activity. A challenging illness to treat, due to the lack of understanding of the underlying neurobiology, AN has the highest mortality rate among psychiatric illnesses. To address this need, neuroscientists are using an animal model to study how neural circuits may contribute toward vulnerability to AN and may be affected by AN. Activity-based anorexia (ABA) is a bio-behavioral phenomenon described in rodents that models the key symptoms of anorexia nervosa. When rodents with free access to voluntary exercise on a running wheel experience food restriction, they become hyperactive - running more than animals with free access to food. Here, we describe the procedures by which ABA is induced in adolescent female C57BL/6 mice. On postnatal day 36 (P36), the animal is housed with access to voluntary exercise on a running wheel. After 4 days of acclimation to the running wheel, on P40, all food is removed from the cage. For the next 3 days, food is returned to the cage (allowing animals free food access) for 2 hr daily. After the fourth day of food restriction, free access to food is returned and the running wheel is removed from the cage to allow the animals to recover. Continuous multi-day analysis of running wheel activity shows that mice become hyperactive within 24 hr following the onset of food restriction. The mice run even during the limited time during which they have access to food. Additionally, the circadian pattern of wheel running becomes disrupted by the experience of food restriction. We have been able to correlate neurobiological changes with various aspects of the animals' wheel running behavior to implicate particular brain regions and neurochemical changes with resilience and vulnerability to food-restriction induced hyperactivity.

  13. Using the Activity-based Anorexia Rodent Model to Study the Neurobiological Basis of Anorexia Nervosa

    PubMed Central

    Chowdhury, Tara Gunkali; Chen, Yi-Wen; Aoki, Chiye

    2015-01-01

    Anorexia nervosa (AN) is a psychiatric illness characterized by excessively restricted caloric intake and abnormally high levels of physical activity. A challenging illness to treat, due to the lack of understanding of the underlying neurobiology, AN has the highest mortality rate among psychiatric illnesses. To address this need, neuroscientists are using an animal model to study how neural circuits may contribute toward vulnerability to AN and may be affected by AN. Activity-based anorexia (ABA) is a bio-behavioral phenomenon described in rodents that models the key symptoms of anorexia nervosa. When rodents with free access to voluntary exercise on a running wheel experience food restriction, they become hyperactive – running more than animals with free access to food. Here, we describe the procedures by which ABA is induced in adolescent female C57BL/6 mice. On postnatal day 36 (P36), the animal is housed with access to voluntary exercise on a running wheel. After 4 days of acclimation to the running wheel, on P40, all food is removed from the cage. For the next 3 days, food is returned to the cage (allowing animals free food access) for 2 hr daily. After the fourth day of food restriction, free access to food is returned and the running wheel is removed from the cage to allow the animals to recover. Continuous multi-day analysis of running wheel activity shows that mice become hyperactive within 24 hr following the onset of food restriction. The mice run even during the limited time during which they have access to food. Additionally, the circadian pattern of wheel running becomes disrupted by the experience of food restriction. We have been able to correlate neurobiological changes with various aspects of the animals’ wheel running behavior to implicate particular brain regions and neurochemical changes with resilience and vulnerability to food-restriction induced hyperactivity. PMID:26555618

  14. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  15. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    NASA Astrophysics Data System (ADS)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for height grids of different cell spacings at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large-scale Topographic Base map (BGT, map scale 1:1.000) and the National height model of the Netherlands AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
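
    The neighbour-aware generalisation step could look like the following sketch, in which the class codes and the rule that barrier classes keep their crest height within a coarse cell are illustrative assumptions rather than the paper's exact procedure.

```python
# Label-aware coarsening sketch: when aggregating fine cells into coarse
# blocks, a "barrier" class (e.g. buildings) wins its block and keeps the
# maximum height there, instead of being averaged away.
import numpy as np

def generalise(height, label, factor, barrier_classes=(1,)):
    """Coarsen a height grid by `factor`, preserving barrier crests."""
    H, W = height.shape
    h = height.reshape(H // factor, factor, W // factor, factor)
    l = label.reshape(H // factor, factor, W // factor, factor)
    is_barrier = np.isin(l, barrier_classes)
    out = h.mean(axis=(1, 3))                      # default: averaged height
    crest = np.where(is_barrier, h, -np.inf).max(axis=(1, 3))
    has_barrier = is_barrier.any(axis=(1, 3))
    out[has_barrier] = crest[has_barrier]          # barriers win their block
    return out
```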

  16. Geographic Information System and Remote Sensing Approach with Hydrologic Rational Model for Flood Event Analysis in Jakarta

    NASA Astrophysics Data System (ADS)

    Aditya, M. R.; Hernina, R.; Rokhmatuloh

    2017-12-01

    Rapid development in Jakarta, which generates more impervious surface, has reduced the amount of rainfall infiltrating into the soil layer and increased run-off. In some events, continuous high rainfall intensity can create sudden floods in Jakarta City. This article used rainfall data for Jakarta during 10 February 2015 to compute rainfall intensity, which was then interpolated with an ordinary kriging technique. The spatial distribution of rainfall intensity was then overlaid with run-off coefficients based on the land use types of the study area. The peak run-off within each cell resulting from the hydrologic rational model was then summed over the whole study area to generate the total peak run-off. For this study area, land use types consisted of 51.9% industrial, 37.57% parks, and 10.54% residential, with estimated total peak run-offs of 6.04 m3/sec, 0.39 m3/sec, and 0.31 m3/sec, respectively.
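
    The underlying computation is the standard rational formula, Q = C i A / 3.6, with Q in m3/s, i in mm/h and A in km2. The sketch below applies it per land use class; the coefficient values, areas and rainfall intensity are illustrative assumptions, not the paper's inputs.

```python
# Back-of-the-envelope rational-method check: Q = C * i * A / 3.6
# (Q in m^3/s, i in mm/h, A in km^2).  All input values are assumed.
runoff_coeff = {"industrial": 0.8, "parks": 0.2, "residential": 0.5}
area_km2 = {"industrial": 0.30, "parks": 0.22, "residential": 0.06}
intensity_mm_h = 90.0

total = 0.0
for lu, c in runoff_coeff.items():
    q = c * intensity_mm_h * area_km2[lu] / 3.6   # peak run-off per land use
    total += q
    print(f"{lu:12s}: {q:5.2f} m^3/s")
print(f"total peak run-off: {total:.2f} m^3/s")
```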

  17. An AOGCM based assessment of interseasonal variability in Pakistan

    NASA Astrophysics Data System (ADS)

    Asmat, U.; Athar, H.; Nabeel, A.; Latif, M.

    2018-01-01

    The interseasonal variability of two basic climatic parameters (precipitation and temperature) is assessed over the vulnerable and data-sparse region of Pakistan (23° to 37°N and 60° to 75°E) for two Coupled Model Intercomparison Project 3 (CMIP3) based Atmospheric-Oceanic General Circulation Model (AOGCM) versions, CM2.0 and CM2.1, by the Geophysical Fluid Dynamics Laboratory (GFDL), and two CMIP5 based AOGCM versions, CM2p1 and CM3.0. A recent historical 50-year period (1951-2000) is analyzed and compared with APHRODITE for precipitation, and with National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis based gridded datasets for temperature, for the following four seasons: DJF, MAM, JJA, and SON. The study area is divided into three regions: all Pakistan, northern Pakistan, and southern Pakistan. The interseasonal variability of precipitation and temperature is derived from all three (five) runs of CM2.0 (CM2.1) and from all ten (five) runs of CM2p1 (CM3.0). The bias, root mean square error (RMSE), one-sigma standard deviation (SD), and correlation coefficient (CC) are used as assessing metrics. The following individual runs have a positive CC with respect to APHRODITE at the ≤1% confidence level (CL). On a seasonal basis for the CMIP5 based GFDL models, during DJF: CM2p1R5 (for all Pakistan) and CM2p1R5 (for northern Pakistan); during MAM: CM2p1R5 (for southern Pakistan; this run has the lowest centered RMSE of 0.11 mm/day); and on an annual basis: CM3.0R3 (for all Pakistan). However, out of these four runs, only CM2p1 (for southern Pakistan) has SD < SDobs (0.08 < 0.12 mm/day). There are 13 other runs for which the positive CC is at the ≤5% CL, relative to either observed precipitation or temperature. Out of these 13 runs, only the average of the runs of GFDL-CM2.1 in CMIP3 in JJA in southern Pakistan has SD < SDobs (0.56 < 0.59 °C), with a centered RMSE value of 0.65 °C. These characteristics of the GFDL-CM2p1 runs are supported by their relatively better simulation of the spatial distribution of 1000-850 hPa averaged-layer wind patterns, relative to NCEP/NCAR 1000-850 hPa averaged wind patterns, over Pakistan in the respective seasons. A variance-based bias adjustment, when applied, displays considerable interseasonal bias reduction in the long-term mean of both precipitation and temperature, with no change in trend.
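
    The four assessing metrics named above are standard and can be written compactly; the sketch below is a generic implementation for a pair of model and observed series, not the authors' code.

```python
# Bias, RMSE, centered RMSE, one-sigma SD and correlation coefficient for a
# model series against an observed series.
import numpy as np

def assess(model, obs):
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    # "centered" RMSE: remove the mean bias before computing the error
    crmse = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    cc = np.corrcoef(model, obs)[0, 1]
    return dict(bias=bias, rmse=rmse, centered_rmse=crmse,
                sd_model=np.std(model), sd_obs=np.std(obs), cc=cc)
```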

  18. An approach to combining parallel and cross-over trials with and without run-in periods using individual patient data.

    PubMed

    Tvete, Ingunn F; Olsen, Inge C; Fagerland, Morten W; Meland, Nils; Aldrin, Magne; Smerud, Knut T; Holden, Lars

    2012-04-01

    In active run-in trials, where patients may be excluded after a run-in period based on their response to the treatment, it is implicitly assumed that patients have individual treatment effects. If individual patient data are available, active run-in trials can be modelled using patient-specific random effects. With more than one trial on the same medication available, one can obtain a more precise overall treatment effect estimate. We present a model for joint analysis of a two-sequence, four-period cross-over trial (AABB/BBAA) and a three-sequence, two-period active run-in trial (AB/AA/A), where the aim is to investigate the effect of a new treatment for patients with pain due to osteoarthritis. Our approach enables us to separately estimate the direct treatment effect for all patients, for the patients excluded after the active run-in trial prior to randomisation, and for the patients who completed the active run-in trial. A similar model approach can be used to analyse other types of run-in trials, but this depends on the data and type of other trials available. We assume equality of the various carry-over effects over time. The proposed approach is flexible and can be modified to handle other designs. Our results should be encouraging for those responsible for planning cost-efficient clinical development programmes.

  19. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    PubMed

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows typically induced by flood runoffs cause numerous environmental impacts, such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to consider vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality could be trivial.
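
    A plausible form of the RUN2-style settling velocity is Stokes' law with a temperature-dependent water viscosity; the exact formulation used in the paper may differ, so the sketch below is only the standard relation the abstract alludes to.

```python
# Stokes settling velocity with temperature-dependent viscosity (a standard
# relation; the paper's exact formulation may differ).
def settling_velocity(d_m, rho_s=2650.0, temp_c=20.0):
    """Settling velocity (m/s) for particle diameter d_m (m)."""
    g, rho_w = 9.81, 1000.0
    # Empirical (Poiseuille-type) viscosity of water in Pa*s:
    mu = 1.79e-3 / (1.0 + 0.0337 * temp_c + 0.000221 * temp_c ** 2)
    return g * d_m ** 2 * (rho_s - rho_w) / (18.0 * mu)

print(settling_velocity(10e-6))   # ~10 um silt: on the order of 1e-4 m/s
```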

  20. A rapid estimation of tsunami run-up based on finite fault models

    NASA Astrophysics Data System (ADS)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, e.g., Chile, Japan, and Alaska. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  1. Designing Crop Simulation Web Service with Service Oriented Architecture Principle

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.

    2015-12-01

    Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop modeling to crop modelers. We aim to make running crop models convenient for various users so that the utilization of crop models will expand, which will directly improve agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety, and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as it needs for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and a weather generator, and the connection of various services for running crop models for decision support.
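
    On the wire, the planned weather-data connection could be an OGC SOS 2.0 GetObservation request; the sketch below uses the standard KVP binding, but the endpoint URL and the offering and property names are placeholders.

```python
# Fetch observed rainfall from a hypothetical SOS 2.0 endpoint (KVP binding).
import requests

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "weather_station_42",          # placeholder offering id
    "observedProperty": "precipitation",       # placeholder property name
    "temporalFilter": "om:phenomenonTime,2015-01-01/2015-12-31",
}
resp = requests.get("https://example.org/sos", params=params)  # placeholder URL
resp.raise_for_status()
observations = resp.text   # O&M XML, to be parsed and fed to the crop model
```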

  2. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor in GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
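
    The rescaling idea itself (independent of the GPU and of the paper's MATLAB package) is that a single baseline Monte Carlo run can be re-weighted for any new absorption coefficient via Beer-Lambert attenuation over each detected photon's stored path length, as in the following sketch with fabricated stand-in data.

```python
# Single-run rescaling sketch: re-weight one stored baseline run for any new
# absorption coefficient mu_a instead of re-running the full simulation.
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for a stored baseline run: path length (cm) of each detected photon.
path_cm = rng.exponential(scale=1.5, size=1_000_000)

def diffuse_reflectance(mu_a):
    """Reflectance for absorption mu_a (1/cm) from the single stored run."""
    return np.mean(np.exp(-mu_a * path_cm))

for mu_a in (0.01, 0.1, 1.0):
    print(mu_a, diffuse_reflectance(mu_a))
# On a GPU, the exp/mean is evaluated for many mu_a values in parallel.
```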

  3. Consumer Search, Rationing Rules, and the Consequence for Competition

    NASA Astrophysics Data System (ADS)

    Ruebeck, Christopher S.

    Firms' conjectures about demand are consequential in oligopoly games. Through agent-based modeling of consumers' search for products, we can study the rationing of demand between capacity-constrained firms offering homogeneous products and explore the robustness of analytically solvable models' results. After algorithmically formalizing short-run search behavior rather than assuming a long-run average, this study predicts stronger competition in a two-stage capacity-price game.

  4. Design of ProjectRun21: a 14-week prospective cohort study of the influence of running experience and running pace on running-related injury in half-marathoners.

    PubMed

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard

    2017-11-06

    Participation in half-marathons has increased steeply during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Since load capacity increases along with adaptive running training, the runner's running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. ProjectRun21 will examine whether particular subgroups of runners with certain running experience and running paces sustain more running-related injuries than other subgroups of runners. This will enable sport coaches, physiotherapists, and runners themselves to evaluate the injury risk of taking up a 14-week running schedule for a half-marathon.

  5. Exhaust Emission Rates for Heavy-Duty On road Vehicles in MOVES201X

    EPA Science Inventory

    Updated running exhaust gaseous emission rates (THC, CO, NOx, CO2) for heavy-duty diesel trucks model year 2010 and later based on portable emission measurements from the manufacturer-run, heavy-duty in-use testing (HDIUT) program. Updated cold start emission rates and soak adjus...

  6. Evaluation of bacterial run and tumble motility parameters through trajectory analysis

    NASA Astrophysics Data System (ADS)

    Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash

    2018-04-01

    In this paper, a method for extracting the behavioral parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images representing the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and then the distribution of parameters for each mode was extracted by fitting the mathematical distributions best representing the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, a Gamma distribution was found to fit the marginal velocity best, and a Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition rate distribution, a log-logistic distribution and a log-normal distribution, respectively, were found to do a better job than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
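
    A generative sketch of the fitted model might look as follows: Gamma-distributed speeds and Logistic-distributed turning angles per mode, with mode switching at assumed rates. All parameter values are made up for illustration; the paper fits them from trajectories (and a Gaussian copula for the velocity autocorrelation, omitted here).

```python
# Run-and-tumble trajectory generator with the distribution families named in
# the abstract; parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(4)

def trajectory(n_steps, dt=0.1):
    pos, angle, mode = np.zeros(2), 0.0, "run"
    path = [pos.copy()]
    for _ in range(n_steps):
        if mode == "run":
            speed = rng.gamma(shape=4.0, scale=5.0)    # um/s, assumed
            angle += rng.logistic(loc=0.0, scale=0.1)  # small deviations
            if rng.random() < 0.05:                    # assumed switch prob.
                mode = "tumble"
        else:
            speed = rng.gamma(shape=1.5, scale=2.0)    # slower while tumbling
            angle += rng.logistic(loc=0.0, scale=0.8)  # large reorientation
            if rng.random() < 0.3:
                mode = "run"
        pos = pos + speed * dt * np.array([np.cos(angle), np.sin(angle)])
        path.append(pos.copy())
    return np.array(path)

xy = trajectory(500)   # one synthetic trajectory in the x-y plane
```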

  7. Microcomputer pollution model for civilian airports and Air Force bases. Model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segal, H.M.; Hamilton, P.L.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.

  8. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    PubMed

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P < 0.001] and Össur Flex-Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness ( P ≥ 0.180) nor height ( P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < 0.001) averaged from both legs. Metabolic cost was reduced with more symmetric peak vertical ground reaction forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry ( P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike nonamputees who decrease leg stiffness with increased in-series surface stiffness, biological limb stiffness for athletes with unilateral transtibial amputations is positively correlated with increased in-series (prosthetic) stiffness.

  9. Human and avian running on uneven ground: a model-based comparison

    PubMed Central

    Birn-Jeffery, A. V.; Blum, Y.

    2016-01-01

    Birds and humans are successful bipedal runners, who have individually evolved bipedalism, but the extent of the similarities and differences of their bipedal locomotion is unknown. In turn, the anatomical differences of their locomotor systems complicate direct comparisons. However, a simplifying mechanical model, such as the conservative spring–mass model, can be used to describe both avian and human running and thus, provides a way to compare the locomotor strategies that birds and humans use when running on level and uneven ground. Although humans run with significantly steeper leg angles at touchdown and stiffer legs when compared with cursorial ground birds, swing-leg adaptations (leg angle and leg length kinematics) used by birds and humans while running appear similar across all types of uneven ground. Nevertheless, owing to morphological restrictions, the crouched avian leg has a greater range of leg angle and leg length adaptations when coping with drops and downward steps than the straight human leg. On the other hand, the straight human leg seems to use leg stiffness adaptation when coping with obstacles and upward steps unlike the crouched avian leg posture. PMID:27655670
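
    The conservative spring-mass (SLIP) template the comparison rests on reduces the runner to a point mass on a massless linear leg spring; during stance only gravity and the spring act. A minimal stance-phase integration is sketched below with illustrative human-scale parameters.

```python
# Minimal SLIP stance-phase integration; parameters are illustrative.
import numpy as np

g, m = 9.81, 70.0        # gravity (m/s^2), body mass (kg)
k, L0 = 20_000.0, 1.0    # leg stiffness (N/m) and rest leg length (m), assumed

def stance_dynamics(state, foot_x):
    x, y, vx, vy = state
    leg = np.array([x - foot_x, y])
    L = np.hypot(*leg)
    F = k * (L0 - L) * leg / L           # spring force along the leg
    return np.array([vx, vy, F[0] / m, F[1] / m - g])

# Touchdown at a 0.3 rad leg angle; integrate until the leg unloads.
state = np.array([-np.sin(0.3) * L0, np.cos(0.3) * L0, 3.0, -0.5])
dt, foot_x = 1e-4, 0.0
for _ in range(200_000):
    state = state + dt * stance_dynamics(state, foot_x)
    if np.hypot(state[0] - foot_x, state[1]) >= L0:
        break                            # take-off: spring back at rest length
```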

  10. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. The accumulation of rainfall over land in the ditches and channels resolved by the model sub-grid was tested to simulate the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulations, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and spatial patterns of real storm wrack line measurements with the model results simulated during Hurricanes Isabel (2003) and Irene (2011) and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the IPCC (Intergovernmental Panel on Climate Change) were also included in the model scenario runs to simulate future inundation cases.

  11. Connectionist agent-based learning in bank-run decision making

    NASA Astrophysics Data System (ADS)

    Huang, Weihong; Huang, Qiao

    2018-05-01

    It is of utmost importance for policy makers, bankers, and investors to thoroughly understand the probability of bank runs (PBR), which was often neglected in classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents simulation results for the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network and react to bad news, which might lead to a bank run. Both a rise in the late payoff, R, and in the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing bank runs. This paper is one of the pioneering works linking agent-based computational economics and behavioral economics.

  12. Design and implementation of fuzzy-PD controller based on relation models: A cross-entropy optimization approach

    NASA Astrophysics Data System (ADS)

    Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh

    2017-07-01

    In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by a cross-entropy optimization method. A linear quadratic regulator is designed to provide a comparison with the fuzzy-PD controller in terms of control quality parameters. The controllers are ported to and run on an STM32F4 Discovery Kit based on a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs correctly on the embedded system and achieves the desired performance in terms of fast response, good balance, and stability.

  13. Assessing the debris flow run-out frequency of a catchment in the French Alps using a parameterization analysis with the RAMMS numerical run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, H. Y.; Luna, B. Quan; van Westen, C. J.; Christen, M.; Malet, J.-P.; van Asch, Th. W. J.

    2012-04-01

    Debris flows occurring in the European Alps frequently cause significant damage to settlements, power lines and transportation infrastructure, which has led to traffic disruptions, economic loss and even death. Estimating the debris flow run-out extent and the parameter uncertainty related to run-out modeling are some of the difficulties found in the Quantitative Risk Assessment (QRA) of debris flows. Also, the process of the entrainment of material into a debris flow is still not completely understood. Debris flows observed in the French Alps entrain 5-50 times the volume of the initially mobilized source volume. In this study we analyze a debris flow that occurred in 2003 at the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The analysis was carried out using the Voellmy rheology and an entrainment model embedded in the RAMMS 2D numerical modeling software. The historic event was back-calibrated based on source, entrainment and deposit volumes, including the run-out distance, velocities and deposit heights of the debris flow. This was then followed by a sensitivity analysis of the rheological and entrainment parameters to produce 120 debris flow scenarios, leading to a frequency assessment of the run-out distance and deposit height at the debris fan. The study shows that the Voellmy frictional parameters mainly influence the run-out distance and velocity of the flow, while the entrainment parameter has a major impact on the debris flow height. The frequency assessment of the 120 simulated scenarios further gives an indication of the most likely debris flow run-out extents and heights for this catchment. Such an assessment can be an important link between the rheological model parameters and the spatial probability of the run-out for the Quantitative Risk Assessment (QRA) of debris flows.
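
    The two rheological parameters varied in the sensitivity analysis enter through the Voellmy basal resistance, in which a dry-Coulomb term mu scales with the normal load and a turbulent term scales with velocity squared over xi; the sketch below writes out that standard relation with assumed values.

```python
# Voellmy-type basal shear resistance (Pa); all parameter values are assumed.
def voellmy_resistance(h, u, mu=0.1, xi=200.0, rho=1800.0, g=9.81):
    """Resistance for a flow of depth h (m) moving at speed u (m/s).
    mu scales the normal load (mainly controls run-out distance);
    the u**2/xi turbulent term dominates at speed (controls velocity)."""
    return mu * rho * g * h + rho * g * u ** 2 / xi

print(voellmy_resistance(h=2.0, u=8.0))   # example flow state
```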

  14. Adapting NEMO for use as the UK operational storm surge forecasting model

    NASA Astrophysics Data System (ADS)

    Furner, Rachel; Williams, Jane; Horsburgh, Kevin; Saulter, Andrew

    2016-04-01

    The United Kingdom is an area vulnerable to damage due to storm surges, particularly the East Coast, which suffered losses estimated at over £1 billion during the North Sea surge event of the 5th and 6th December 2013. Accurate forecasting of storm surge events for this region is crucial to enable government agencies to assess the risk of overtopping of coastal defences so they can respond appropriately, minimising risk to life and infrastructure. There has been an operational storm surge forecast service for this region since 1978, using a numerical model developed by the National Oceanography Centre (NOC) and run at the UK Met Office. This is also implemented as part of an ensemble prediction system, using perturbed atmospheric forcing to produce an ensemble surge forecast. In order to ensure efficient use of future supercomputer developments and to create synergy with existing operational coastal ocean models, the Met Office and NOC have begun a joint project transitioning the storm surge forecast system from the current CS3X code base to a configuration based on the Nucleus for European Modelling of the Ocean (NEMO). This work involves both adapting NEMO to add functionality, such as allowing the drying out of ocean cells, and changes allowing NEMO to run efficiently as a two-dimensional, barotropic model. As the ensemble surge forecast system is run with 12 members 4 times a day, computational efficiency is of high importance. Upon completion, this project will enable interesting scientific comparisons to be made between a NEMO-based surge model and the full three-dimensional baroclinic NEMO-based models currently run within the Met Office, facilitating assessment of the impact of baroclinic processes and vertical resolution on sea surface height forecasts. Moving to a NEMO code base will also allow many future developments to be more easily used within the storm surge model, given the wide range of options which currently exist within NEMO or are planned for future NEMO releases, such as data assimilation and surge-wave coupling. Assessment of the tidal performance of the NEMO-surge configuration and comparison to the existing operational CS3X model have been carried out. Evaluation of the models focuses on performance relative to the UK Class A tide gauge network, a dataset which was established following the devastating flood of 1953 and which is managed by the British Oceanographic Data Centre (BODC) based at NOC. Trials of the NEMO model in tide-only mode have illustrated the importance of a well-specified bathymetry and, for the 7 km scaled model, a secondary sensitivity to the bed friction coefficient and the specification of the coastline. Preliminary results will also be presented from model runs with atmospheric (wind stress and pressure at mean sea level) forcing.

  15. Delay-feedback control strategy for reducing CO2 emission of traffic flow system

    NASA Astrophysics Data System (ADS)

    Zhang, Li-Dong; Zhu, Wen-Xing

    2015-06-01

    To study signal control strategies for reducing traffic emissions theoretically, we first present a discrete traffic flow model with a relative speed term based on the traditional coupled map car-following model. In the model, the relative speed difference between two successive running cars is incorporated into the following vehicle's acceleration equation. Second, we analyze its stability condition with discrete control system stability theory. Third, we design a delay-feedback controller to suppress traffic jams and decrease traffic emissions based on modern control theory. Finally, numerical simulations are made to support our theoretical results, including a comparison of the models' stability analyses and the influence of model type and signal control on CO2 emissions. The results show that the temporal behavior of our model is superior to that of other models, and that the traffic signal controller has a good effect on traffic jam suppression and traffic CO2 emission, which fully supports the theoretical conclusions.
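
    A sketch of this model family: a discrete (coupled-map) car-following update with an optimal-velocity term, a relative-speed term, and a delay-feedback control input u = k (v(t - tau) - v(t)). The gains and the optimal-velocity function below are illustrative, not the paper's calibration.

```python
# Coupled-map car-following with a relative-speed term and delay feedback.
import numpy as np

def V(headway):                   # optimal-velocity function (assumed form)
    return 16.8 * (np.tanh(0.086 * (headway - 25.0)) + 0.913)

T, alpha, lam, k, tau = 0.5, 0.6, 0.3, 0.2, 4   # step (s), gains, delay steps
N, steps = 20, 2000
x = np.cumsum(np.full(N, 30.0))[::-1].copy()    # positions, leader first
v = np.full(N, V(30.0))
hist = [v.copy()] * (tau + 1)                   # delayed-velocity buffer

for _ in range(steps):
    head = np.empty(N); head[1:] = x[:-1] - x[1:]; head[0] = 30.0
    dv = np.empty(N); dv[1:] = v[:-1] - v[1:]; dv[0] = 0.0
    u = k * (hist[0] - v)                       # delay-feedback control term
    a = alpha * (V(head) - v) + lam * dv + u
    v = np.clip(v + T * a, 0.0, None)
    x = x + T * v
    hist = hist[1:] + [v.copy()]
```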

  16. Sexually dimorphic effects of postnatal treatment on the development of activity-based anorexia in adolescent and adult rats.

    PubMed

    Hancock, Stephanie D; Grant, Virginia L

    2009-12-01

    Hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis is a marked feature of anorexia nervosa. Using a modified version of the activity-based animal model of anorexia nervosa, we examine whether factors known to affect HPA axis activity influence the development of activity-based anorexia (ABA). Male and female rats were subjected to maternal separation or handling procedures during the first two postnatal weeks and tested in a mild version of the ABA paradigm, comprised of 2-hr daily running wheel access followed by 1-hr food access, either in adolescence or adulthood. Compared to handled females, maternally separated females demonstrated greater increases in wheel running and a more pronounced running-induced suppression of food intake during adolescence, but not in adulthood. In contrast, it was only in adulthood that wheel running produced more prolonged anorexic effects in maternally separated than in handled males. These findings highlight the interplay between early postnatal treatment, sex of the animal, and developmental age on running, food intake, and rate of body weight loss in a mild version of the ABA paradigm.

  17. Using Runtime Analysis to Guide Model Checking of Java Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
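
    The deadlock-detection side of such runtime analyses is classically built on a lock-order graph: record which locks were held when each new lock was acquired in the observed run, and warn if the resulting graph contains a cycle, since a different schedule could then deadlock. The sketch below shows that classic idea, not NASA's exact algorithm.

```python
# Lock-order-graph sketch: one non-deadlocking run can still reveal a hazard.
from collections import defaultdict

edges = defaultdict(set)   # lock A -> lock B if B was acquired while holding A

def record_acquire(held_locks, new_lock):
    for h in held_locks:
        edges[h].add(new_lock)

def has_cycle():
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(n):
        color[n] = GREY
        for m in edges[n]:
            if color[m] == GREY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in list(edges))

record_acquire({"L1"}, "L2")   # thread 1: took L2 while holding L1
record_acquire({"L2"}, "L1")   # thread 2: took L1 while holding L2
print(has_cycle())             # True: potential deadlock warning
```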

  18. Multi-Scale Human Respiratory System Simulations to Study Health Effects of Aging, Disease, and Inhaled Substances

    NASA Astrophysics Data System (ADS)

    Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres

    2006-11-01

    Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.

  19. Scalable and Accurate SMT-Based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-31

    accessed from C, C++, Java, and OCaml, and provisions have been made to support other languages. CVC4 can be compiled and run on various flavors of Linux, Mac OS
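
    The fragment above is indexing text, but the technique it concerns, checking a property of a transition system by handing an SMT solver a logical unrolling of the system, can be sketched compactly. The example below performs a bounded model check of a toy counter using the Z3 Python bindings (package z3-solver) as a stand-in solver; CVC4 and its successor cvc5 expose comparable APIs. The system and property are invented for illustration.

    ```python
    from z3 import Ints, Or, Solver, sat

    # Bounded model check of a toy system: x0 = 0, x_{k+1} = x_k + 2.
    # Safety property: x never equals 7. We unroll K steps and ask the SMT
    # solver whether a violating trace exists within the bound.
    K = 10
    xs = Ints(" ".join(f"x{i}" for i in range(K + 1)))
    s = Solver()
    s.add(xs[0] == 0)
    for i in range(K):
        s.add(xs[i + 1] == xs[i] + 2)        # transition relation, unrolled
    s.add(Or([x == 7 for x in xs]))          # negation of the safety property
    verdict = "property violated within bound" if s.check() == sat else "safe up to bound"
    print(verdict, K)
    ```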

  20. NOTE: Implementation of angular response function modeling in SPECT simulations with GATE

    NASA Astrophysics Data System (ADS)

    Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.

    2010-05-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
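
    The core of the ARF approach is to replace photon-by-photon collimator tracking with a lookup into a precomputed table of detection probability. A minimal sketch of that lookup step follows, indexed here by incidence angle only and with an entirely made-up table; the real ARF tables are generated from dedicated Monte Carlo runs and also depend on photon energy.

    ```python
    import numpy as np

    # Illustrative use of a tabulated angular response function (ARF): the
    # detection probability of each photon is interpolated from a precomputed
    # table instead of being obtained by tracking it through the collimator.
    rng = np.random.default_rng(0)
    angles_deg = np.linspace(0.0, 5.0, 51)              # tabulated incidence angles
    arf = np.exp(-(angles_deg / 1.5) ** 2) * 1e-4       # assumed probabilities

    def detection_probability(theta_deg):
        """Linear interpolation into the ARF table; zero beyond its range."""
        return np.interp(theta_deg, angles_deg, arf, right=0.0)

    photons = rng.uniform(0.0, 6.0, size=1_000_000)     # photon incidence angles
    detected = rng.random(photons.size) < detection_probability(photons)
    print(f"detected {detected.sum()} of {photons.size} photons")
    ```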

  1. Analysis of Flexible Car Body of Straddle Monorail Vehicle

    NASA Astrophysics Data System (ADS)

    Zhong, Yuanmu

    2018-03-01

    Based on a finite element model of a straddle monorail vehicle, a rigid-flexible coupling dynamic model that accounts for the car body's flexibility is established. The influence of the vertical stiffness and vertical damping of the running wheel on the modal parameters of the car body is analyzed, and the effect of car body flexibility on the modal parameters and vehicle ride quality is also studied. The results show that when the vertical stiffness of the running wheel is less than 1 MN/m, the car body bounce and pitch frequencies increase with increasing wheel stiffness, whereas at 1 MN/m or more they remain unchanged. When the vertical stiffness of the running wheel is below 1.8 MN/m, the car body bounce and pitch damping ratios increase with increasing wheel stiffness; at 1.8 MN/m or more, they remain unchanged. The vertical damping of the running wheel has no effect on the car body bounce and pitch frequencies, while the bounce and pitch damping ratios increase with increasing wheel damping. The flexibility of the car body has no effect on the modal parameters of the car but improves the vehicle ride quality index.

  2. Experimental evaluation of tool run-out in micro milling

    NASA Astrophysics Data System (ADS)

    Attanasio, Aldo; Ceretti, Elisabetta

    2018-05-01

    This paper deals with the micro milling process, focusing on tool run-out measurement. Among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research aims at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving production quality and process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting edge phase measurement is based on force signal analysis. The developed procedure has been tested on data from micro milling experiments performed on a Ti6Al4V sample. The results showed that the procedure can be successfully used for tool run-out estimation.
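
    One generic way to recover phase information from a force signal is to read the phase of the DFT bin at the spindle rotation frequency. The Python sketch below demonstrates this on a synthetic force trace; the sampling rate, spindle speed, and signal model are assumptions, and the paper's actual estimation procedure is more elaborate.

    ```python
    import numpy as np

    # Illustrative phase extraction from a milling force signal: take the DFT
    # bin at the spindle rotation frequency and read off its phase. The signal
    # is synthetic; parameters are assumed, not the paper's.
    fs = 10_000.0                       # sampling rate, Hz
    f_spindle = 200.0                   # spindle rotation frequency, Hz
    t = np.arange(0, 0.5, 1 / fs)
    true_phase = 0.7                    # rad, the value we try to recover
    force = np.cos(2 * np.pi * f_spindle * t + true_phase) \
            + 0.1 * np.random.randn(t.size)

    spectrum = np.fft.rfft(force)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_spindle))   # bin closest to spindle frequency
    print(f"estimated phase: {np.angle(spectrum[k]):.3f} rad (true {true_phase})")
    ```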

  3. A rat in the labyrinth of anorexia nervosa: contributions of the activity-based anorexia rodent model to the understanding of anorexia nervosa.

    PubMed

    Gutierrez, Emilio

    2013-05-01

    Activity-based anorexia (ABA) is an analogous animal model of anorexia nervosa where food-restricted rats develop excessive running activity when given free access to a running wheel; their body weight sharply decreases, and finally self-starvation and death ensue unless animals are removed from the experimental conditions. The parallel of this animal model with major signs in the human disorder has been the focus of much attention from researchers and clinicians as a platform for translational research. The paper reviews the historical antecedents of ABA, research characterizing its occurrence, and its main limitations and strengths as a model of AN. As a symptomatic model of AN, the ABA model can provide clinicians with innovative and alternative routes for improving the treatment of AN. Copyright © 2013 Wiley Periodicals, Inc.

  4. CalSimHydro Tool - A Web-based interactive tool for the CalSim 3.0 Hydrology Preprocessor

    NASA Astrophysics Data System (ADS)

    Li, P.; Stough, T.; Vu, Q.; Granger, S. L.; Jones, D. J.; Ferreira, I.; Chen, Z.

    2011-12-01

    CalSimHydro, the CalSim 3.0 Hydrology Preprocessor, is an application designed to automate the various steps in the computation of hydrologic inputs for CalSim 3.0, a water resources planning model developed jointly by the California State Department of Water Resources and the United States Bureau of Reclamation, Mid-Pacific Region. CalSimHydro consists of a five-step FORTRAN-based program that runs the individual models in succession, passing information from one model to the next and aggregating data as required by each model. The final product of CalSimHydro is an updated CalSim 3.0 state variable (SV) DSS input file. CalSimHydro consists of (1) a Rainfall-Runoff Model to compute monthly infiltration, (2) a soil moisture and demand calculator (IDC) that estimates surface runoff, deep percolation, and water demands for natural vegetation cover and various crops other than rice, (3) a Rice Water Use Model to compute the water demands, deep percolation, irrigation return flow, and runoff from precipitation for the rice fields, (4) a Refuge Water Use Model that simulates the ponding operations for managed wetlands, and (5) a Data Aggregation and Transfer Module to aggregate the outputs from the above modules and transfer them to the CalSim SV input file. In this presentation, we describe a web-based user interface for CalSimHydro using the Google Earth Plug-In. The CalSimHydro tool allows users to:
    - interact with geo-referenced layers of the Water Budget Areas (WBA) and Demand Units (DU) displayed over the Sacramento Valley,
    - view the input parameters of the hydrology preprocessor for a selected WBA or DU in a time series plot or a tabular form,
    - edit the values of the input parameters in the table or by downloading a spreadsheet of the selected parameter in a selected time range,
    - run the CalSimHydro modules in the backend server and notify the user when the job is done,
    - visualize the model output and compare it with a base run result,
    - download the output SV file to be used to run CalSim 3.0.
    The CalSimHydro tool streamlines the complicated steps to configure and run the hydrology preprocessor by providing a user-friendly visual interface and back-end services to validate user inputs and manage the model execution. It is a powerful addition to the new CalSim 3.0 system.
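
    The sequential, data-passing structure of such a preprocessor is easy to picture in code. The Python sketch below chains hypothetical stand-ins for the five modules, each reading from and adding to a shared state dictionary; the module logic, key names, and coefficients are invented for illustration, not CalSimHydro's actual computations.

    ```python
    # Minimal sketch of running preprocessor modules in succession, each
    # consuming and extending a shared state. All names and coefficients are
    # hypothetical stand-ins for CalSimHydro's FORTRAN modules.
    def rainfall_runoff(state):
        state["infiltration"] = 0.3 * state["precip"]
        return state

    def soil_moisture_demand(state):
        state["deep_percolation"] = 0.1 * state["infiltration"]
        return state

    def rice_water_use(state):
        state["rice_demand"] = 0.05 * state["precip"]
        return state

    def refuge_water_use(state):
        state["refuge_demand"] = 0.02 * state["precip"]
        return state

    def aggregate_and_transfer(state):
        state["sv_update"] = (state["deep_percolation"]
                              + state["rice_demand"] + state["refuge_demand"])
        return state

    pipeline = [rainfall_runoff, soil_moisture_demand, rice_water_use,
                refuge_water_use, aggregate_and_transfer]
    state = {"precip": 100.0}            # example monthly precipitation, mm
    for module in pipeline:
        state = module(state)
    print(state)
    ```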

  5. Integrated Planning Model (IPM) Base Case v.4.10

    EPA Pesticide Factsheets

    Learn about EPA's IPM Base Case v.4.10, including Proposed Transport Rule results, documentation, the National Electric Energy Data System (NEEDS) database and user's guide, and run results using previous base cases.

  6. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    PubMed

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call the rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with a simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with body state profiles similar to those of the model at all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
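
    For readers unfamiliar with the template, the stance phase of the classic SLIP model (the baseline that R-SLIP extends with a torsional spring and rolling contact) reduces to a point mass on a massless radial spring pinned at the foot. A minimal integration sketch follows; the mass, stiffness, and touchdown state are illustrative, not the paper's robot parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Stance-phase dynamics of the classic SLIP template (point foot, linear
    # leg spring). Parameters are illustrative only.
    m, k, l0, g = 7.0, 2000.0, 0.2, 9.81   # mass kg, stiffness N/m, rest length m

    def stance(t, s):
        x, y, vx, vy = s                   # foot pinned at the origin
        l = np.hypot(x, y)
        f = k * (l0 - l)                   # spring force along the leg
        return [vx, vy, f * x / (l * m), f * y / (l * m) - g]

    s0 = [-0.02, 0.19, 1.0, -0.5]          # touchdown state, leg slightly compressed
    liftoff = lambda t, s: np.hypot(s[0], s[1]) - l0   # leg back at rest length
    liftoff.terminal, liftoff.direction = True, 1
    sol = solve_ivp(stance, [0, 1.0], s0, events=liftoff, max_step=1e-3)
    print(f"lift-off at t = {sol.t[-1]:.3f} s, state = {np.round(sol.y[:, -1], 3)}")
    ```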

  7. Consequence assessment of large rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Hermanns, Reginald L.; Horton, Pascal; Sandøy, Gro; Roberts, Nicholas J.; Jaboyedoff, Michel; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Steep glacially carved valleys and fjords in Norway are prone to many landslide types, including large rockslides, rockfalls, and debris flows. Large rockslides and their secondary effects (rockslide-triggered displacement waves, inundation behind landslide dams and outburst floods from failure of landslide dams) pose a significant hazard to the population living in the valleys and along the fjord shorelines. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected more than 230 unstable slopes with significant postglacial deformation. This large number necessitates prioritisation of follow-up activities, such as more detailed investigations, periodic displacement measurements, continuous monitoring and early-warning systems. Prioritisation is achieved through a hazard and risk classification system, which has been developed by a panel of international and Norwegian experts (www.ngu.no/en-gb/hm/Publications/Reports/2012/2012-029). The risk classification system combines a qualitative hazard assessment with a consequences assessment focusing on potential life losses. The hazard assessment is based on a series of nine geomorphological, engineering geological and structural criteria, as well as displacement rates, past events and other signs of activity. We present a method for consequence assessment comprising four main steps: 1. computation of the volume of the unstable rock slope; 2. run-out assessment based on the volume-dependent angle of reach (Fahrböschung) or detailed numerical run-out modelling; 3. assessment of possible displacement wave propagation and run-up based on empirical relations or modelling in 2D or 3D; and 4. estimation of the number of persons exposed to rock avalanches or displacement waves. Volume computation of an unstable rock slope is based on the sloping local base level technique, which uses a digital elevation model to create a second-order curved surface across the mapped extent of the unstable rock slope. This surface represents the possible basal sliding surface of the unstable rock slope. The elevation difference between this surface and the topographic surface yields an estimate of the volume of the unstable rock slope. A tool has been developed for the present study to adapt the curvature parameters of the computed surface to local geological and structural conditions. The obtained volume is then used to define the angle of reach of a possible rock avalanche from the unstable rock slope by using empirically derived angle of reach vs. volume relations. The run-out area is calculated using Flow-R; the software is widely used for run-out assessment of debris flows and is adapted here for assessment of rock avalanches, including their potential to ascend opposing slopes. Under certain conditions, more sophisticated and complex numerical run-out models are also used. For rock avalanches with the potential to reach a fjord or a lake, the propagation and run-up area of triggered displacement waves is assessed. Empirical relations of wave run-up height as a function of rock avalanche volume and distance from the impact location are derived from a national and international inventory of landslide-triggered displacement waves. These empirical relations are used in first-level hazard assessment and, where necessary, followed by 2D or 3D displacement wave modelling. 
Finally, the population exposed in the rock avalanche run-out area and in the run-up area of a possible displacement wave is assessed, taking into account different population groups: inhabitants, persons in critical infrastructure (hospitals and other emergency services), persons in schools and kindergartens, persons at work or in shops, tourists, persons on ferries and so on. Exposure levels are defined for each population group, and vulnerability values are set for the rock avalanche run-out area (100%) and the run-up area of a possible displacement wave (70%). The total number of persons within the hazard area is then calculated taking into account exposure and vulnerability. The method for consequence assessment is currently being tested in several case studies in Norway and will thereafter be applied to all unstable rock slopes in the country to assess their risk level. Follow-up activities (detailed investigations, periodic displacement measurements or continuous monitoring and early-warning systems) can then be prioritised based on the risk level and with a standard approach for the whole of Norway.
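
    Step 2 of the method, run-out length from the volume-dependent angle of reach (Fahrböschung), can be sketched compactly. A common empirical form is a Scheidegger-type log-linear regression of tan(alpha) on volume; the coefficients and input values below are illustrative, not those calibrated in this study.

    ```python
    import math

    # Run-out from the volume-dependent angle of reach:
    # tan(alpha) = 10**(a*log10(V) + b), then L = H / tan(alpha).
    # Coefficients follow the classic Scheidegger-type form; treat them and
    # the inputs as illustrative placeholders.
    def angle_of_reach_deg(volume_m3, a=-0.15666, b=0.62419):
        return math.degrees(math.atan(10 ** (a * math.log10(volume_m3) + b)))

    volume = 5.0e6                 # unstable-slope volume, m^3 (example)
    drop_height = 800.0            # elevation drop from source to valley, m
    alpha = angle_of_reach_deg(volume)
    runout = drop_height / math.tan(math.radians(alpha))
    print(f"angle of reach ~ {alpha:.1f} deg, horizontal run-out ~ {runout:.0f} m")
    ```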

  8. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
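
    The core run-time assurance pattern is simple to sketch: a monitor checks each command from the advanced controller against a certified safety envelope and falls back to a simpler baseline controller on violation. The Python sketch below is a generic illustration of this switching logic; the envelope bounds and both controllers are stand-ins, not NASA's engine control laws.

    ```python
    # Minimal run-time assurance sketch: monitor the advanced controller's
    # command and revert to a certified baseline controller when the command
    # leaves the assumed safety envelope.
    SAFE_LOW, SAFE_HIGH = 0.0, 1.0          # assumed actuator command envelope

    def advanced_controller(state):
        return 1.4 * state                  # high-performance, may misbehave

    def baseline_controller(state):
        return min(max(0.5 * state, SAFE_LOW), SAFE_HIGH)  # simple, certified

    def assured_step(state):
        u = advanced_controller(state)
        if SAFE_LOW <= u <= SAFE_HIGH:      # monitor: command inside envelope?
            return u, "advanced"
        return baseline_controller(state), "reverted"

    for s in (0.2, 0.5, 0.9):
        u, mode = assured_step(s)
        print(f"state={s:.1f} -> command={u:.2f} ({mode})")
    ```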

  9. Improving the Taiwan Military’s Disaster Relief Response to Typhoons

    DTIC Science & Technology

    2015-06-01

    circulation, are mostly westbound. When they reach the vicinity of Taiwan or the Philippines, which are always at the edge of the Pacific subtropical high...files from the POM base case model, one set for each design point. To automate the process of running all the GAMS files, a Windows batch file (BAT) is used to call on GAMS to solve each version of the model. The BAT file creates a new directory for each run to hold output, and one of the outputs
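
    The same batch automation translates directly into a few lines of Python, shown below as a hedged sketch: the design-point names and .gms file layout are hypothetical, and GAMS is simply invoked as a command-line executable with each run's working directory isolating its output.

    ```python
    import subprocess
    from pathlib import Path

    # Python rendering of the BAT-file workflow described above: make a fresh
    # directory per design point and invoke GAMS on the corresponding model.
    for dp in ["dp01", "dp02", "dp03"]:          # hypothetical design points
        run_dir = Path("runs") / dp
        run_dir.mkdir(parents=True, exist_ok=True)   # one output dir per run
        model = Path(f"{dp}.gms").resolve()          # model variant for this point
        # cwd=run_dir keeps each run's listing/output files separate.
        subprocess.run(["gams", str(model)], cwd=run_dir, check=True)
    ```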

  10. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  11. WEB-BASED MODELING OF A FERTILIZER SOLUTION SPILL IN THE OHIO RIVER

    EPA Science Inventory

    Environmental computer models are usually desktop models. Some web-enabled models are beginning to appear where the user can use a browser to run the models on a central web server. Several issues arise when a desktop model is transferred to a web architecture. This paper discuss...

  12. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

    A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
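
    Load balancing of a sub-basin partition is essentially a scheduling problem: distribute sub-basins so each processor carries a similar share of the hydrologic workload. The sketch below uses the classic longest-processing-time greedy heuristic as an illustration; the sub-basin loads and processor count are invented, and tRIBS's graph-based partitioner is more sophisticated.

    ```python
    import heapq

    # Greedy longest-processing-time (LPT) assignment of sub-basins to
    # processors: always hand the next-largest job to the least-loaded node.
    subbasin_load = {"A": 90, "B": 70, "C": 65, "D": 40, "E": 35, "F": 20}
    n_proc = 3

    heap = [(0.0, p, []) for p in range(n_proc)]        # (load, proc id, basins)
    heapq.heapify(heap)
    for basin, load in sorted(subbasin_load.items(), key=lambda kv: -kv[1]):
        total, p, basins = heapq.heappop(heap)          # least-loaded processor
        heapq.heappush(heap, (total + load, p, basins + [basin]))

    for total, p, basins in sorted(heap, key=lambda t: t[1]):
        print(f"processor {p}: {basins} (load {total})")
    ```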

  13. Comprehensive Assessment of Models and Events based on Library tools (CAMEL)

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; Boblitt, J. M.; DeZeeuw, D.; Mays, M. L.; Kuznetsova, M. M.; Wiegand, C.

    2017-12-01

    At the Community Coordinated Modeling Center (CCMC), the assessment of modeling skill using a library of model-data comparison metrics is taken to the next level by fully integrating the ability to request a series of runs with the same model parameters for a list of events. The CAMEL framework initiates and runs a series of selected, pre-defined simulation settings for participating models (e.g., WSA-ENLIL, SWMF-SC+IH for the heliosphere, SWMF-GM, OpenGGCM, LFM, GUMICS for the magnetosphere) and performs post-processing using existing tools for a host of different output parameters. The framework compares the resulting time series data with respective observational data and computes a suite of metrics such as Prediction Efficiency, Root Mean Square Error, Probability of Detection, Probability of False Detection, Heidke Skill Score for each model-data pair. The system then plots scores by event and aggregated over all events for all participating models and run settings. We are building on past experiences with model-data comparisons of magnetosphere and ionosphere model outputs in GEM2008, GEM-CEDAR CETI2010 and Operational Space Weather Model challenges (2010-2013). We can apply the framework also to solar-heliosphere as well as radiation belt models. The CAMEL framework takes advantage of model simulations described with Space Physics Archive Search and Extract (SPASE) metadata and a database backend design developed for a next-generation Run-on-Request system at the CCMC.
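
    The metrics named above all derive from a model-observation time series and its event contingency table. The sketch below computes them for a synthetic pair of series with an arbitrary event threshold; the data and threshold are invented, and CAMEL's exact definitions may differ in detail (e.g., in how events are windowed).

    ```python
    import numpy as np

    # Skill metrics from a model-vs-observation series and its contingency
    # table (hits a, false alarms b, misses c, correct negatives d).
    obs = np.array([1.2, 3.4, 0.8, 4.1, 2.2, 5.0, 0.5, 3.9])
    mod = np.array([1.0, 2.9, 1.1, 4.5, 1.8, 4.2, 0.9, 3.1])
    thresh = 3.0                                   # arbitrary event threshold

    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    pe = 1.0 - np.sum((mod - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    o, m = obs >= thresh, mod >= thresh
    a = np.sum(m & o)          # hits
    b = np.sum(m & ~o)         # false alarms
    c = np.sum(~m & o)         # misses
    d = np.sum(~m & ~o)        # correct negatives
    pod = a / (a + c)                              # probability of detection
    pofd = b / (b + d)                             # probability of false detection
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

    print(f"RMSE={rmse:.2f} PE={pe:.2f} POD={pod:.2f} POFD={pofd:.2f} HSS={hss:.2f}")
    ```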

  14. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability, and an adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  15. Australia's marine virtual laboratory

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe

    2014-05-01

    In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open source application which provides a number of model choices and supports search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.

  16. Development of an Interactive Computer-Based Learning Strategy to Assist in Teaching Water Quality Modelling

    ERIC Educational Resources Information Center

    Zigic, Sasha; Lemckert, Charles J.

    2007-01-01

    The following paper presents a computer-based learning strategy to assist in introducing and teaching water quality modelling to undergraduate civil engineering students. As part of the learning strategy, an interactive computer-based instructional (CBI) aid was specifically developed to assist students to set up, run and analyse the output from a…

  17. 42 CFR § 512.307 - Subsequent calculations.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.307... the initial NPRA, using claims data and non-claims-based payment data available at that time, to account for final claims run-out, final changes in non-claims-based payment data, and any additional...

  18. Stereological Study on the Positive Effect of Running Exercise on the Capillaries in the Hippocampus in a Depression Model.

    PubMed

    Chen, Linmu; Zhou, Chunni; Tan, Chuanxue; Wang, Feifei; Gao, Yuan; Huang, Chunxia; Zhang, Yi; Jiang, Lin; Tang, Yong

    2017-01-01

    Running exercise is an effective method to improve depressive symptoms when combined with drugs. However, the underlying mechanisms are not fully clear. Cerebral blood flow perfusion in depressed patients is significantly lower in the hippocampus. Physical activity can achieve cerebrovascular benefits. The purpose of this study was to evaluate the impacts of running exercise on capillaries in the hippocampal CA1 and dentate gyrus (DG) regions. The chronic unpredictable stress (CUS) depression model was used in this study. CUS rats were given 4 weeks of running exercise from the fifth week to the eighth week (20 min every day from Monday to Friday each week). The sucrose consumption test was used to measure anhedonia. Furthermore, stereological methods were used to investigate the capillary changes among the control group, CUS/Standard group and CUS/Running group. Sucrose consumption significantly increased in the CUS/Running group. Running exercise had positive effects on capillary parameters in the hippocampal CA1 and DG regions, such as the total volume, total length and total surface area. These results demonstrated that the protection of capillaries by running exercise in the hippocampal CA1 and DG regions might be one of the structural bases for the exercise-induced treatment of depression-like behavior. They also suggest that drugs and behavior influence capillaries, which may be considered as a new means for depression treatment in the future.

  19. Running exercise protects the capillaries in white matter in a rat model of depression.

    PubMed

    Chen, Lin-Mu; Zhang, Ai-Pin; Wang, Fei-Fei; Tan, Chuan-Xue; Gao, Yuan; Huang, Chun-Xia; Zhang, Yi; Jiang, Lin; Zhou, Chun-Ni; Chao, Feng-Lei; Zhang, Lei; Tang, Yong

    2016-12-01

    Running has been shown to improve depressive symptoms when used as an adjunct to medication. However, the mechanisms underlying the antidepressant effects of running are not fully understood. Changes in the capillaries in white matter have been discovered in clinical patients and in depression model rats. Considering the important part white matter plays in depression, running may cause capillary structural changes in white matter. Chronic unpredictable stress (CUS) rats were provided with a 4-week running exercise (from the fifth week to the eighth week) for 20 minutes each day for 5 consecutive days each week. Anhedonia was measured by a behavior test. Furthermore, capillary changes were investigated in the control group, the CUS/Standard group, and the CUS/Running group using stereological methods. The 4-week running increased sucrose consumption significantly in the CUS/Running group and had significant effects on the total volume, total length, and total surface area of the capillaries in the white matter of the depression rats. These results demonstrated that exercise-induced protection of the capillaries in white matter might be one of the structural bases for the exercise-induced treatment of depression. It might provide important parameters for further study of the vascular mechanisms of depression and a new research direction for the development of clinical antidepressant means. J. Comp. Neurol. 524:3577-3586, 2016. © 2016 Wiley Periodicals, Inc.

  20. Novel application of red-light runner proneness theory within traffic microsimulation to an actual signal junction.

    PubMed

    Bell, Margaret Carol; Galatioto, Fabio; Giuffrè, Tullio; Tesoriere, Giovanni

    2012-05-01

    Building on previous research, a conceptual framework based on potential conflicts analysis has provided a quantitative evaluation of 'proneness' to red-light running behaviour at urban signalised intersections of different geometric, flow and driver characteristics. The results provided evidence that commonly used violation rates could cause inappropriate evaluation of the extent of the red-light running phenomenon. Initially, an in-depth investigation of the functional form of the mathematical relationship between the potential and actual red-light runners was carried out. The application of the conceptual framework was tested on a signalised intersection in order to quantify the proneness to red-light running. For the particular junction studied, daytime proneness was found to be 0.17 north and 0.16 south for the opposing main road approaches and 0.42 east and 0.59 west for the secondary approaches. Further investigations were carried out using a traffic microsimulation model to explore those geometric features and traffic volumes (arrival patterns at the stop-line) that significantly affect red-light running. In this way the prediction capability of the proposed potential conflict model was improved. A degree of consistency between the measured and simulated red-light running was observed, and the conceptual framework was tested through a sensitivity analysis applied to different stop-line positions and traffic volume variations. The microsimulation, although at an early stage of development, has shown promise in its ability to model unintentional red-light running behaviour and, following further work through application to other junctions, potentially provides a tool for evaluating the effect of alternative junction designs on proneness. In brief, this paper proposes and applies a novel approach to modelling red-light running using microsimulation and demonstrates consistency between the observed and theoretical results. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Treesearch

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  2. Joining up with "Not Us" Staff to Run Adolescent Groups in Schools

    ERIC Educational Resources Information Center

    Sayder, Suzan

    2008-01-01

    The author describes the development of a model for working with staff members from non-psychoanalytic backgrounds to run therapeutic and therapy-like pupil groups in schools. She draws on her experience of co-facilitating groups at a London-based secondary school and uses examples from recent group work with Year 10 pupils (aged 14-15). Child…

  3. Arabic Information Retrieval at UMass in TREC-10

    DTIC Science & Technology

    2006-01-01

    electronic bilingual dictionaries, and stemmers, and our unfamiliarity with Arabic, we had our hands full carrying out some standard approaches to... monolingual and cross-language Arabic retrieval, and did not submit any runs based on novel approaches. We submitted three monolingual runs and one... dictionary construction, expanded Arabic queries, improved estimation and smoothing in language models, and added combination of evidence, increasing

  4. Measuring joint kinematics of treadmill walking and running: Comparison between an inertial sensor based system and a camera-based system.

    PubMed

    Nüesch, Corina; Roos, Elena; Pagenstert, Geert; Mündermann, Annegret

    2017-05-24

    Inertial sensor systems are becoming increasingly popular for gait analysis because their use is simple and time efficient. This study aimed to compare joint kinematics measured by the inertial sensor system RehaGait® with those of an optoelectronic system (Vicon®) for treadmill walking and running. Additionally, the test re-test repeatability of kinematic waveforms and discrete parameters for the RehaGait® was investigated. Twenty healthy runners participated in this study. Inertial sensors and reflective markers (PlugIn Gait) were attached according to the respective guidelines. The two systems were started manually at the same time. Twenty consecutive strides for walking and running were recorded, and each software package calculated sagittal plane ankle, knee and hip kinematics. Measurements were repeated after 20 min. Ensemble means were analyzed by calculating coefficients of multiple correlation for waveforms and root mean square errors (RMSE) for waveforms and discrete parameters. After correcting the offset between waveforms, the two systems/models showed good agreement, with coefficients of multiple correlation above 0.950 for walking and running. RMSEs of the waveforms were below 5° for walking and below 8° for running. RMSEs for ranges of motion were between 4° and 9° for walking and running. Repeatability analysis of waveforms showed very good to excellent coefficients of multiple correlation (>0.937) and RMSEs of 3° for walking and 3-7° for running. These results indicate that in healthy subjects sagittal plane joint kinematics measured with the RehaGait® are comparable to those using a Vicon® system/model and that the measured kinematics have good repeatability, especially for walking. Copyright © 2017 Elsevier Ltd. All rights reserved.
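
    The offset-corrected waveform comparison used here is straightforward to sketch: remove the mean offset between the two systems' joint-angle curves, then report the RMSE of the residual. The curves below are synthetic stand-ins for gait-cycle waveforms.

    ```python
    import numpy as np

    # Offset-corrected comparison of two joint-angle waveforms: subtract the
    # constant bias between systems, then compute the residual RMSE.
    t = np.linspace(0, 1, 101)                     # one normalized gait cycle
    vicon = 20 * np.sin(2 * np.pi * t) + 5         # "camera" knee angle, deg
    imu = 20 * np.sin(2 * np.pi * t) + 8 + np.random.normal(0, 1, t.size)

    offset = np.mean(imu - vicon)                  # constant bias between systems
    rmse = np.sqrt(np.mean((imu - offset - vicon) ** 2))
    print(f"offset = {offset:.1f} deg, offset-corrected RMSE = {rmse:.1f} deg")
    ```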

  5. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.

  6. Forecasting the impact of storm waves and sea-level rise on Midway Atoll and Laysan Island within the Papahānaumokuākea Marine National Monument—a comparison of passive versus dynamic inundation models

    USGS Publications Warehouse

    Storlazzi, Curt D.; Berkowitz, Paul; Reynolds, Michelle H.; Logan, Joshua B.

    2013-01-01

    Two inundation events in 2011 underscored the potential for elevated water levels to damage infrastructure and affect terrestrial ecosystems on the low-lying Northwestern Hawaiian Islands in the Papahānaumokuākea Marine National Monument. The goal of this study was to compare passive "bathtub" inundation models based on geographic information systems (GIS) to those that include dynamic water levels caused by wave-induced set-up and run-up for two end-member island morphologies: Midway, a classic atoll with islands on the shallow (2-8 m) atoll rim and a deep, central lagoon; and Laysan, which is characterized by a deep (20-30 m) atoll rim and an island at the center of the atoll. Vulnerability to elevated water levels was assessed using hindcast wind and wave data to drive coupled physics-based numerical wave, current, and water-level models for the atolls. The resulting model data were then used to compute run-up elevations using a parametric run-up equation under both present conditions and future sea-level-rise scenarios. In both geomorphologies, wave heights and wavelengths adjacent to the island shorelines increased more than three times and four times, respectively, with increasing values of sea-level rise, as more deep-water wave energy could propagate over the atoll rim and larger wind-driven waves could develop on the atoll. Although these increases in water depth resulted in decreased set-up along the islands' shorelines, the larger wave heights and longer wavelengths due to sea-level rise increased the resulting wave-induced run-up. Run-up values were spatially heterogeneous and dependent on the incident wave direction, bathymetry, and island configuration. Island inundation was modeled to increase substantially when wave-driven effects were included, suggesting that inundation and impacts to infrastructure and terrestrial habitats will occur at lower values of predicted sea-level rise, and thus sooner in the 21st century, than suggested by passive GIS-based "bathtub" inundation models. Lastly, observations and the modeling results suggest that classic atolls with islands on a shallow atoll rim are more susceptible to the combined effects of sea-level rise and wave-driven inundation than atolls characterized by a deep atoll rim.

  7. Impact of ship emissions on air pollution and AOD over North Atlantic and European Arctic

    NASA Astrophysics Data System (ADS)

    Kaminski, Jacek W.; Struzewska, Joanna; Jefimow, Maciej; Durka, Pawel

    2016-04-01

    The iAREA project combines experimental and theoretical research in order to contribute new knowledge on the impact of absorbing aerosols on the climate system in the European Arctic (http://www.igf.fuw.edu.pl/iAREA). The tropospheric chemistry model GEM-AQ (Global Environmental Multiscale Air Quality) was used as the computational tool. The core of the model is based on a weather prediction model with environmental processes (chemistry and aerosols) implemented on-line and interactive (i.e. providing feedback of chemistry on radiation and dynamics). The numerical grid covered the Euro-Atlantic region with a resolution of 50 km. Emissions developed by NILU in the ECLIPSE project were used (Klimont et al., 2013). The model was run for two 1-year scenarios, with 2014 chosen as the base year for simulations and analysis. The scenarios comprise a base run with the most up-to-date emissions and a run without maritime emissions. The analysis will focus on the contribution of maritime emissions to levels of particulate matter and gaseous pollutants over the European Arctic, North Atlantic and coastal areas. The annual variability will be assessed based on monthly mean near-surface concentration fields. The impact of shipping on near-surface air pollution over the Euro-Atlantic region will be assessed for ozone, NO2, SO2, CO, PM10, and PM2.5. A contribution of ship emissions to AOD will also be analysed.

  8. Configuring the HYSPLIT Model for National Weather Service Forecast Office and Spaceflight Meteorology Group Applications

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph G.

    2009-01-01

    For expedience in delivering dispersion guidance in a diversity of operational situations, National Weather Service Melbourne (MLB) and the Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests National Centers for Environmental Prediction (NCEP) guidance, such as the North American Mesoscale (NAM) and Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecasting (WRF) model output into HYSPLIT format.
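
    A scheduled automation of this kind reduces to a loop over sites that writes a HYSPLIT CONTROL file and launches the trajectory executable. The sketch below is a simplified illustration: the paths, site list, and CONTROL-file layout are hypothetical and abbreviated, so consult the HYSPLIT documentation for the full format.

    ```python
    import subprocess
    from pathlib import Path

    # Simplified scheduled-run loop: one CONTROL file and one trajectory run
    # per pre-selected site. The CONTROL layout here is abbreviated.
    sites = [("KSC", 28.57, -80.65), ("MLB", 28.10, -80.65)]     # hypothetical
    met_file = Path("met") / "nam.t00z.bin"    # hypothetical converted NCEP grid

    for name, lat, lon in sites:
        run_dir = Path("runs") / name
        run_dir.mkdir(parents=True, exist_ok=True)
        (run_dir / "CONTROL").write_text(
            f"09 01 01 00\n1\n{lat} {lon} 10.0\n12\n0\n10000.0\n"
            f"1\n{met_file.parent}/\n{met_file.name}\n{run_dir}/\ntdump\n"
        )
        # hyts_std is HYSPLIT's trajectory executable; it reads CONTROL
        # from its working directory.
        subprocess.run(["hyts_std"], cwd=run_dir, check=True)
    ```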

  9. Examining the impact of larval source management and insecticide-treated nets using a spatial agent-based model of Anopheles gambiae and a landscape generator tool.

    PubMed

    Arifin, S M Niaz; Madey, Gregory R; Collins, Frank H

    2013-08-21

    Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes and the number of replicated simulation runs reported in results. Selection of the ITN coverage definition may also affect the predictive findings. Hence, independent verification, by replication, of prior findings of published models bears special importance. A spatially explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate results of an earlier LSM study, the original landscapes and scenarios are replicated by using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. General agreement with an earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and excessively high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITNs' mortality can play a more important role with higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the density of houses increases. Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) for future modellers are also recommended.

  10. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    USDA-ARS?s Scientific Manuscript database

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  11. Deep learning based state recognition of substation switches

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2018-06-01

    Unlike the traditional method, which recognizes the state of substation switches based on the operating rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; then the restricted Boltzmann machine is employed to replace the fully connected layer of the convolutional neural network and trained on our small image dataset of 110 kV substation switches to get a stronger model. Experiments conducted on our image dataset of 110 kV substation switches show that the proposed approach can be applied in substations to reduce running costs and enable truly unattended operation.
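
    The transfer-learning recipe described here, pre-train on ILSVRC2012, then retrain a replacement head on a small domain dataset, can be sketched with standard tooling. In the PyTorch sketch below, a plain linear layer stands in for the paper's restricted Boltzmann machine, and the training batch is random data standing in for switch images; everything else is generic transfer-learning boilerplate, not the authors' code.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Transfer learning sketch: freeze ImageNet-pretrained features, replace
    # the classifier head (a linear layer stands in for the paper's RBM), and
    # fine-tune the head on a small two-class dataset (open/closed switch).
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in net.parameters():
        p.requires_grad = False               # keep pretrained features fixed
    net.fc = nn.Linear(net.fc.in_features, 2) # new head for 2 switch states

    optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a random batch (stand-in for images).
    x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()
    print(f"loss = {loss.item():.3f}")
    ```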

  12. Factors That Influence Running Intensity in Interchange Players in Professional Rugby League.

    PubMed

    Delaney, Jace A; Thornton, Heidi R; Duthie, Grant M; Dascombe, Ben J

    2016-11-01

    Rugby league coaches adopt replacement strategies for their interchange players to maximize running intensity; however, it is important to understand the factors that may influence match performance. To assess the independent factors affecting the running intensity sustained by interchange players during professional rugby league, global positioning system (GPS) data were collected from all interchanged players (starters and nonstarters) in a professional rugby league squad across 24 matches of a National Rugby League season. A multilevel mixed-model approach was employed to establish the effect of various technical (attacking and defensive involvements), temporal (bout duration, time in possession, etc), and situational (season phase, recovery cycle, etc) factors on the relative distance covered and average metabolic power (P_met) during competition. Significant effects were standardized using correlation coefficients, and the likelihood of the effect was described using magnitude-based inferences. Superior intermittent running ability resulted in very likely large increases in both relative distance and P_met. As the length of a bout increased, both measures of running intensity exhibited a small decrease. There were at least likely small increases in running intensity for matches played after short recovery cycles and against strong opposition. During a bout, the number of collision-based involvements increased running intensity, whereas time in possession and ball time out of play decreased demands. These data demonstrate a complex interaction of individual- and match-based factors that require consideration when developing interchange strategies, and the manipulation of training loads during shorter recovery periods and against stronger opponents may be beneficial.

  13. Composite dark energy: Cosmon models with running cosmological term and gravitational coupling

    NASA Astrophysics Data System (ADS)

    Grande, Javier; Solà, Joan; Štefančić, Hrvoje

    2007-02-01

    In the recent literature on dark energy (DE) model building we have learnt that cosmologies with variable cosmological parameters can mimic more traditional DE pictures exclusively based on scalar fields (e.g. quintessence and phantom). In a previous work we have illustrated this situation within the context of a renormalization group running cosmological term, Λ. Here we analyze the possibility that both the cosmological term and the gravitational coupling, G, are running parameters within a more general framework (a variant of the so-called “ΛXCDM models”) in which the DE fluid can be a mixture of a running Λ and another dynamical entity X (the “cosmon”) which may behave quintessence-like or phantom-like. We compute the effective EOS parameter, ω, of this composite fluid and show that the ΛXCDM can mimic to a large extent the standard ΛCDM model while retaining features hinting at its potential composite nature (such as the smooth crossing of the cosmological constant boundary ω=-1). We further argue that the ΛXCDM models can cure the cosmological coincidence problem. All in all we suggest that future experimental studies on precision cosmology should take seriously the possibility that the DE fluid can be a composite medium whose dynamical features are partially caused and renormalized by the quantum running of the cosmological parameters.

  14. Cumulative impact of developments on the surrounding roadways' traffic.

    DOT National Transportation Integrated Search

    2011-10-01

    "In order to recommend a procedure for cumulative impact study, four different travel : demand models were developed, calibrated, and validated. The base year for the models was 2005. : Two study areas were used, and the models were run for three per...

  15. Testing for the validity of purchasing power parity theory both in the long-run and the short-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-11-01

    The purchasing power parity theory says that the exchange rates between two nations ought to be equivalent to the ratio of the aggregate price levels of the two nations. For more than a decade, there has been substantial interest in testing the validity of purchasing power parity (PPP) empirically. This paper performs a series of tests to see if PPP is valid for the ASEAN-5 nations for the period 2000-2016 using monthly data. For this purpose, we conducted four different tests of stationarity and two cointegration tests (Pedroni and Westerlund), and estimated a VAR model. The stationarity (unit root) tests reveal that the variables are not stationary in levels but are stationary in first differences. The cointegration test results did not reject the H0 of no cointegration, implying the absence of a long-run association among the variables, and the results of the VAR model did not reveal a strong short-run relationship. Based on the data, we therefore conclude that PPP is not valid in either the long run or the short run for ASEAN-5 during 2000-2016.
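
    The first step of such an analysis, unit-root testing in levels and first differences, is routine with statsmodels. The sketch below runs an augmented Dickey-Fuller test on a simulated random walk standing in for the ASEAN-5 price/exchange-rate series; data and significance threshold are illustrative.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    # ADF unit-root test on a series in levels and in first differences.
    # A random walk has a unit root in levels but not in differences.
    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(size=200))        # simulated random walk

    for label, x in [("levels", series), ("first difference", np.diff(series))]:
        stat, pvalue = adfuller(x)[:2]
        verdict = "stationary" if pvalue < 0.05 else "non-stationary"
        print(f"{label}: ADF stat = {stat:.2f}, p = {pvalue:.3f} -> {verdict}")
    ```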

  16. CMAQ predictions of tropospheric ozone in the U.S. southwest: influence of lateral boundary and synoptic conditions.

    PubMed

    Shi, Chune; Fernando, H J S; Hyde, Peter

    2012-02-01

    Phoenix, Arizona, has been an ozone nonattainment area for the past several years and it remains so. Mitigation strategies call for improved modeling methodologies as well as understanding of ozone formation and destruction mechanisms during seasons of high ozone events. To this end, the efficacy of lateral boundary conditions (LBCs) based on satellite measurements (adjusted-LBCs) was investigated, vis-à-vis the default-LBCs, for improving the predictions of Models-3/CMAQ photochemical air quality modeling system. The model evaluations were conducted using hourly ground-level ozone and NO(2) concentrations as well as tropospheric NO(2) columns and ozone concentrations in the middle to upper troposphere, with the 'design' periods being June and July of 2006. Both included high ozone episodes, but the June (pre-monsoon) period was characterized by local thermal circulation whereas the July (monsoon) period by synoptic influence. Overall, improved simulations were noted for adjusted-LBC runs for ozone concentrations both at the ground-level and in the middle to upper troposphere, based on EPA-recommended model performance metrics. The probability of detection (POD) of ozone exceedances (>75ppb, 8-h averages) for the entire domain increased from 20.8% for the default-LBC run to 33.7% for the adjusted-LBC run. A process analysis of modeling results revealed that ozone within PBL during bulk of the pre-monsoon season is contributed by local photochemistry and vertical advection, while the contributions of horizontal and vertical advections are comparable in the monsoon season. The process analysis with adjusted-LBC runs confirms the contributions of vertical advection to episodic high ozone days, and hence elucidates the importance of improving predictability of upper levels with improved LBCs. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Run-up Variability due to Source Effects

    NASA Astrophysics Data System (ADS)

    Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; simulation results were analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed in order to create the dataset necessary to investigate the inter-event variability of the run-up height along the coast; and the inter-event variability of the run-up height along the coast was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid Coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore of the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, the input model parameters were varied and the effects on run-up were analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters (evaluated results) leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as μ + σ ≈ 2μ. 
The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans and the other related steps.
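
    The Monte Carlo step described above can be sketched in a few lines: sample source parameters from their fitted distributions, map them to run-up through the derived linear relationship, and compute the coefficient of variation at a coastal point. The coefficients and distributions below are hypothetical stand-ins, not the paper's fitted values.

```python
# A minimal sketch of the Monte Carlo propagation step, with hypothetical
# coefficients: run-up at one coastal point is assumed to respond linearly
# to fault slip and rupture length, as the simulations suggest.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 1600

# Hypothetical source-parameter distributions (not the paper's fitted values)
slip = rng.normal(loc=8.0, scale=2.0, size=n_scenarios)        # m
length = rng.normal(loc=100.0, scale=25.0, size=n_scenarios)   # km

# Hypothetical linear response surface for one point on the coast
a_slip, a_length, r0 = 0.35, 0.02, 0.5
runup = r0 + a_slip * slip + a_length * length                 # m

cov = runup.std() / runup.mean()   # coefficient of variation at this point
print(f"mean run-up {runup.mean():.2f} m, CoV {cov:.2f}")
```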

  18. Preferred gait and walk-run transition speeds in ostriches measured using GPS-IMU sensors.

    PubMed

    Daley, Monica A; Channon, Anthony J; Nolan, Grant S; Hall, Jade

    2016-10-15

    The ostrich (Struthio camelus) is widely appreciated as a fast and agile bipedal athlete, and is a useful comparative bipedal model for human locomotion. Here, we used GPS-IMU sensors to measure naturally selected gait dynamics of ostriches roaming freely over a wide range of speeds in an open field and developed a quantitative method for distinguishing walking and running using accelerometry. We compared freely selected gait-speed distributions with previous laboratory measures of gait dynamics and energetics. We also measured the walk-run and run-walk transition speeds and compared them with those reported for humans. We found that ostriches prefer to walk remarkably slowly, with a narrow walking speed distribution consistent with minimizing cost of transport (CoT) according to a rigid-legged walking model. The dimensionless speeds of the walk-run and run-walk transitions are slower than those observed in humans. Unlike humans, ostriches transition to a run well below the mechanical limit necessitating an aerial phase, as predicted by a compass-gait walking model. When running, ostriches use a broad speed distribution, consistent with previous observations that ostriches are relatively economical runners and have a flat curve for CoT against speed. In contrast, horses exhibit U-shaped curves for CoT against speed, with a narrow speed range within each gait for minimizing CoT. Overall, the gait dynamics of ostriches moving freely over natural terrain are consistent with previous lab-based measures of locomotion. Nonetheless, ostriches, like humans, exhibit a gait-transition hysteresis that is not explained by steady-state locomotor dynamics and energetics. Further study is required to understand the dynamics of gait transitions. © 2016. Published by The Company of Biologists Ltd.
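
    Dimensionless speed in the gait literature is commonly defined through the Froude number, i.e. speed normalized by the square root of gravity times a characteristic leg length; a small sketch under that assumption (the leg lengths and speeds below are illustrative, not the paper's measurements):

```python
# Sketch of the dimensionless-speed normalization commonly used to compare
# gait transitions across species of different size.
import math

def dimensionless_speed(u_m_s: float, leg_length_m: float, g: float = 9.81) -> float:
    """Speed normalized by sqrt(g * L): the square root of the Froude number."""
    return u_m_s / math.sqrt(g * leg_length_m)

# Illustrative: a human (L ~ 0.9 m) transitioning near u ~ 2.0 m/s
print(dimensionless_speed(2.0, 0.9))   # ~0.67, near the classic human value
# A hypothetical ostrich with a longer effective limb (L ~ 1.0 m) at 1.8 m/s
print(dimensionless_speed(1.8, 1.0))   # a lower dimensionless transition speed
```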

  19. Numerical analysis of the transportation characteristics of a self-running sliding stage based on near-field acoustic levitation.

    PubMed

    Feng, Kai; Liu, Yuanyuan; Cheng, Miaomiao

    2015-12-01

    Owing to its distinct non-contact and oil-free characteristics, a self-running sliding stage based on near-field acoustic levitation can be used in environments that demand clean rooms and zero noise. This paper presents a numerical analysis of the lifting and transportation capacity of a non-contact transportation system. Two simplified structural models, namely free-vibration and forced-vibration models, are proposed for the study of the displacement amplitude distribution of the two cases using the finite element method. After coupling the stage displacement into the film thickness, the Reynolds equation is solved by the finite difference method to obtain the lifting and thrusting forces. The effects of amplitude, frequency, and standing wave ratio (SWR) on the dynamic performance of the sliding stage are investigated parametrically. Numerical results show good agreement with published experimental values. The predictions also reveal that greater transportation capacity of the self-running sliding stage is generally achieved at lower SWR and higher amplitude.
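
    To illustrate the finite-difference treatment of the Reynolds equation, the sketch below solves a steady, one-dimensional, incompressible form, d/dx(h³ dp/dx) = 6μU dh/dx, and integrates the gauge pressure for load capacity. This is a deliberately simplified stand-in, not the paper's time-dependent squeeze-film formulation; the film-thickness profile and operating values are assumptions.

```python
# 1-D steady incompressible Reynolds equation solved with central finite
# differences; h(x) is a hypothetical film-thickness profile.
import numpy as np

n, L = 201, 0.02                 # nodes, pad length [m]
mu, U = 1.8e-5, 0.5              # air viscosity [Pa s], sliding speed [m/s]
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
h = 10e-6 + 5e-6 * np.cos(2 * np.pi * x / L)    # assumed film thickness [m]

h_half = 0.5 * (h[:-1] + h[1:])                 # h at the i+1/2 faces
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                       # ambient-pressure boundaries
for i in range(1, n - 1):
    A[i, i - 1] = h_half[i - 1] ** 3 / dx**2
    A[i, i + 1] = h_half[i] ** 3 / dx**2
    A[i, i] = -(A[i, i - 1] + A[i, i + 1])
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)

p = np.linalg.solve(A, b)                       # gauge pressure [Pa]
lift = np.trapz(p, x)                           # load per unit width [N/m]
print(f"lift per unit width: {lift:.3e} N/m")
```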

  20. Workstation-Based Real-Time Mesoscale Modeling Designed for Weather Support to Operations at the Kennedy Space Center and Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Zack, John W.; Taylor, Gregory E.

    1996-01-01

    This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations and for manned and unmanned launches, since KSC/CCAS is located in central Florida, where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires balancing the need to run the system fast enough that the output is available before it expires against the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, the Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS to operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.

  1. Sensitivity study of a dynamic thermodynamic sea ice model

    NASA Astrophysics Data System (ADS)

    Holland, David M.; Mysak, Lawrence A.; Manak, Davinder K.; Oberhuber, Josef M.

    1993-02-01

    A numerical simulation of the seasonal sea ice cover in the Arctic Ocean and the Greenland, Iceland, and Norwegian seas is presented. The sea ice model is extracted from Oberhuber's (1990) coupled sea ice-mixed layer-isopycnal general circulation model and is written in spherical coordinates. The advantage of such a model over previous sea ice models is that it can be easily coupled to either global atmospheric or ocean general circulation models written in spherical coordinates. In this model, the thermodynamics are a modification of those of Parkinson and Washington (1979), while the dynamics use the full Hibler (1979) viscous-plastic rheology. Monthly thermodynamic and dynamic forcing fields for the atmosphere and ocean are specified. The simulations of the seasonal cycle of ice thickness, compactness, and velocity, for a control set of parameters, compare favorably with the known seasonal characteristics of these fields. A sensitivity study of the control simulation of the seasonal sea ice cover is presented. The sensitivity runs are carried out under three different themes, namely, numerical conditions, parameter values, and physical processes. This last theme refers to experiments in which physical processes are either newly added or completely removed from the model. Approximately 80 sensitivity runs have been performed in which a change from the control run environment has been implemented. Comparisons have been made between the control run and a particular sensitivity run based on time series of the seasonal cycle of the domain-averaged ice thickness, compactness, areal coverage, and kinetic energy. In addition, spatially varying fields of ice thickness, compactness, velocity, and surface temperature for each season are presented for selected experiments. A brief description and discussion of the more interesting experiments are presented. The simulation of the seasonal cycle of Arctic sea ice cover is shown to be robust.

  2. U.S. Patent Pending, Information Security Analysis Using Game Theory and Simulation, U.S. Patent Application No.: 14/097,840

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G

    Vulnerability in the security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented agent-based model (ABM) is constructed in a simulator application. The game ABM represents security activity in the information system. The game ABM has two opposing participants, an attacker and a defender, probabilistic game rules, and allowable game states. A specified number of simulations are run, and a probabilistic number of the allowable game states are reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states are collected to determine the probability of one or more aspects of security in the information system.
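
    The kind of simulation the patent describes can be illustrated with a toy attacker-defender game: a small set of allowable states, probabilistic rules, and state-visit frequencies estimated over many runs. All states and probabilities below are hypothetical, not taken from the patent.

```python
# Toy attacker-defender game ABM: repeated simulation runs estimate the
# probability of reaching each allowable game state.
import random
from collections import Counter

STATES = ["secure", "probed", "breached", "restored"]
P_ATTACK_SUCCESS = 0.3   # attacker advances probed -> breached
P_DEFEND_SUCCESS = 0.6   # defender restores breached -> restored

def run_game(max_turns: int = 50) -> list[str]:
    state, visited = "secure", ["secure"]
    for _ in range(max_turns):
        if state in ("secure", "restored"):
            state = "probed"                      # attacker always probes
        elif state == "probed":
            state = "breached" if random.random() < P_ATTACK_SUCCESS else "secure"
        elif state == "breached":
            state = "restored" if random.random() < P_DEFEND_SUCCESS else "breached"
        visited.append(state)
    return visited

counts = Counter(s for _ in range(10_000) for s in run_game())
total = sum(counts.values())
for s in STATES:
    print(f"P({s}) ~= {counts[s] / total:.3f}")
```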

  3. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

    The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model makes it possible to run logic programs composed of Horn clauses by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
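
    The cascaded resolution the DNA model implements can be sketched in silico: clauses are sets of literals, and a proposition and its complement cancel when two clauses are resolved, mirroring the hybridization of a strand with its complement. The tiny Horn program below is an illustration, not a fragment of the paper.

```python
# In-silico sketch of cascaded resolution over clauses (sets of literals).
def resolve(c1: frozenset, c2: frozenset):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

# Horn program: p.  p -> q.  q -> r.  Query: is r derivable?
# Add ~r and try to derive the empty clause.
clauses = {frozenset({"p"}), frozenset({"~p", "q"}),
           frozenset({"~q", "r"}), frozenset({"~r"})}
derived = set(clauses)
while True:
    new = {r for a in derived for b in derived for r in resolve(a, b)}
    if frozenset() in new:
        print("empty clause derived: query r is provable")
        break
    if new <= derived:
        break
    derived |= new
```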

  4. Giving students the run of sprinting models

    NASA Astrophysics Data System (ADS)

    Heck, André; Ellermeijer, Ton

    2009-11-01

    A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
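
    Data smoothing and numerical differentiation of video tracking data, as used in such motion analyses, can be sketched with a Savitzky-Golay filter; the frame rate and motion profile below are assumptions, not COACH's internals.

```python
# Smoothing and numerical differentiation of a simulated noisy position
# trace, in the spirit of video-based motion analysis of sprinting.
import numpy as np
from scipy.signal import savgol_filter

fps = 50.0                                   # assumed video frame rate
t = np.arange(0, 4, 1 / fps)
x = 4.0 * t + 0.5 * t**2                     # "true" sprint position [m]
x_noisy = x + np.random.normal(0, 0.01, t.size)

x_smooth = savgol_filter(x_noisy, window_length=21, polyorder=3)
v = savgol_filter(x_noisy, 21, 3, deriv=1, delta=1 / fps)   # velocity [m/s]
a = savgol_filter(x_noisy, 21, 3, deriv=2, delta=1 / fps)   # acceleration [m/s^2]
print(v[100], a[100])   # true values at t = 2 s: v = 6 m/s, a = 1 m/s^2
```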

  5. Bio-Optical Data Assimilation With Observational Error Covariance Derived From an Ensemble of Satellite Images

    NASA Astrophysics Data System (ADS)

    Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter

    2018-03-01

    An ensemble-based approach to specifying observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from the statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in an Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has higher RMSE than the run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but smaller or comparable RMSE relative to the run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms relative to the total phytoplankton is reduced by a factor of two in comparison to the non-assimilative run.
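
    The core Optimal Interpolation update, with the observational error covariance R estimated from an ensemble, can be sketched as follows; the dimensions, covariance shapes and synthetic ensemble are toy assumptions, not the paper's configuration.

```python
# Optimal Interpolation update: analysis = forecast + K (obs - H forecast),
# with R derived from ensemble statistics of synthetic "satellite images".
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 10                      # state size, number of Chl observations
H = np.zeros((m, n))
H[np.arange(m), np.arange(m)] = 1.0             # observe the first m points

# Assumed exponentially decaying forecast error covariance
Pf = 0.3 * np.exp(-np.abs(np.subtract.outer(range(n), range(n))) / 5.0)

ens = rng.normal(1.0, 0.2, size=(100, m))       # ensemble of Chl images
R = np.cov(ens, rowvar=False)                   # observational error covariance

xf = np.ones(n)                                 # forecast Chl
y = ens.mean(axis=0)                            # observations (ensemble mean)

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # OI (Kalman-type) gain
xa = xf + K @ (y - H @ xf)                      # analysis
print(xa[:5])
```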

  6. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    PubMed

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pairwise sequence alignment problem by combining the Message Passing Interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN perform the multiple pairwise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
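
    For reference, the Smith-Waterman recurrence that the MND/WGN cluster parallelizes looks like this in a plain single-threaded form; the scoring values are illustrative, and the row-wise fill (keeping only the previous row) echoes, but does not reproduce, the paper's modified implementation.

```python
# Plain Smith-Waterman local alignment score with linear gap penalties,
# filled row by row so only one previous row is kept in memory.
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr                 # row-wise: discard older rows
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # best local alignment score
```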

  7. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, with buoyancy treated using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  8. Essays on Commodity Prices and Macroeconomic Performance of Developing and Resources Rich Economies: Evidence from Kazakhstan

    NASA Astrophysics Data System (ADS)

    Bilgin, Ferhat I.

    My dissertation consists of three essays in empirical macroeconomics. The objective of this research is to use rigorous time-series econometric analysis to investigate the impact of commodity prices on the macroeconomic performance of a small, developing and resource-rich country, which is in the process of transition from a purely command-and-control economy to a market-oriented one. Essay 1 studies the relationship between Kazakhstan's GDP, total government expenditure, the real effective exchange rate and the world oil price. Specifically, I use the cointegrated vector autoregression (CVAR) and error correction modeling (ECM) approach to identify the long- and short-run relations that may exist among these macroeconomic variables. I found a long-run relationship for Kazakhstan's GDP, which depends positively on government spending and the oil price, and negatively on the real effective exchange rate. In the short run, the growth rate of GDP depends on the growth rates of the oil price and investment and on the magnitude of the deviation from the long-run equilibrium. Essay 2 studies the inflation process in Kazakhstan based on the analysis of price formation in the following sectors: monetary, external, labor, and goods and services. The modeling is conducted from two different perspectives: the first is the monetary model of inflation framework and the second is the mark-up modeling framework. Encompassing test results show that the mark-up model performs better than the monetary model in explaining inflation in Kazakhstan. According to the mark-up inflation model, in the long run the price level is positively related to unit labor costs, import prices and government-administered prices, as well as the world oil prices. In the short run, inflation is positively influenced by the previous quarter's inflation, the contemporaneous changes in government-administered prices and oil prices, and the changes of contemporaneous and lagged unit labor costs, and it is negatively affected by the previous quarter's mark-up. Essay 3 empirically examines the determinants of the trade balance for a small oil-exporting country within the context of Kazakhstan. The dominant theory, by Harberger-Laursen-Metzler (HLM), predicts that positive terms-of-trade shocks will improve the trade balance in the short run, but the effect will fade away in the long run. I estimate a cointegrated vector autoregression (CVAR) and vector error correction model (VECM) to study the long-run and short-run impacts on the trade balance. The results suggest that, in the long run, an increase in the terms of trade has a positive effect on the trade balance, while an increase in GDP and appreciation of the real effective exchange rate have a negative effect on the trade balance. In the short run, the terms of trade has a direct positive impact on the trade balance, real income and the real exchange rate. On the other hand, appreciation of the currency has a negative impact on the trade balance. The error correction term, which represents the deviation from the long-run equilibrium between the trade balance, real income, terms of trade and real exchange rate, has a negative effect on the growth rate of the trade balance. These results provide further evidence for the idea that, in the long run, the HLM effect depends not only on the duration of the shock but also on the structure of the economy.
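
    The CVAR/ECM machinery used throughout these essays can be sketched with the two-step Engle-Granger procedure on synthetic data standing in for the Kazakhstan series; the data-generating process and variable names below are assumptions for illustration only.

```python
# Two-step Engle-Granger error correction model on synthetic cointegrated
# series: (1) estimate the long-run relation, (2) regress short-run changes
# on the lagged equilibrium error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
oil = np.cumsum(rng.normal(0, 1, T))            # I(1) "oil price"
gdp = 0.8 * oil + rng.normal(0, 0.5, T)         # cointegrated "GDP"

# Step 1: long-run relation and equilibrium errors
long_run = sm.OLS(gdp, sm.add_constant(oil)).fit()
ect = long_run.resid                             # error-correction term

# Step 2: short-run dynamics with the lagged ECT
d_gdp, d_oil = np.diff(gdp), np.diff(oil)
X = sm.add_constant(np.column_stack([d_oil, ect[:-1]]))
ecm = sm.OLS(d_gdp, X).fit()
print(ecm.params)   # the ECT coefficient should be negative (mean reversion)
```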

  9. A brief opportunity to run does not function as a reinforcer for mice selected for high daily wheel-running rates.

    PubMed

    Belke, Terry W; Garland, Theodore

    2007-09-01

    Mice from replicate lines, selectively bred based on high daily wheel-running rates, run more total revolutions and at higher average speeds than do mice from nonselected control lines. Based on this difference it was assumed that selected mice would find the opportunity to run in a wheel a more efficacious consequence. To assess this assumption within an operant paradigm, mice must be trained to make a response to produce the opportunity to run as a consequence. In the present study an autoshaping procedure was used to compare the acquisition of lever pressing reinforced by the opportunity to run for a brief opportunity (i.e., 90 s) between selected and control mice and then, using an operant procedure, the effect of the duration of the opportunity to run on lever pressing was assessed by varying reinforcer duration over values of 90 s, 30 min, and 90 s. The reinforcement schedule was a ratio schedule (FR 1 or VR 3). Results from the autoshaping phase showed that more control mice met a criterion of responses on 50% of trials. During the operant phase, when reinforcer duration was 90 s, almost all control, but few selected mice completed a session of 20 reinforcers; however, when reinforcer duration was increased to 30 min almost all selected and control mice completed a session of 20 reinforcers. Taken together, these results suggest that selective breeding based on wheel-running rates over 24 hr may have altered the motivational system in a way that reduces the reinforcing value of shorter running durations. The implications of this finding for these mice as a model for attention deficit hyperactivity disorder (ADHD) are discussed. It also is proposed that there may be an inherent trade-off in the motivational system for activities of short versus long duration.

  10. A Brief Opportunity to Run Does Not Function as a Reinforcer for Mice Selected for High Daily Wheel-running Rates

    PubMed Central

    Belke, Terry W; Garland, Theodore, Jr

    2007-01-01

    Mice from replicate lines, selectively bred based on high daily wheel-running rates, run more total revolutions and at higher average speeds than do mice from nonselected control lines. Based on this difference it was assumed that selected mice would find the opportunity to run in a wheel a more efficacious consequence. To assess this assumption within an operant paradigm, mice must be trained to make a response to produce the opportunity to run as a consequence. In the present study an autoshaping procedure was used to compare the acquisition of lever pressing reinforced by the opportunity to run for a brief opportunity (i.e., 90 s) between selected and control mice and then, using an operant procedure, the effect of the duration of the opportunity to run on lever pressing was assessed by varying reinforcer duration over values of 90 s, 30 min, and 90 s. The reinforcement schedule was a ratio schedule (FR 1 or VR 3). Results from the autoshaping phase showed that more control mice met a criterion of responses on 50% of trials. During the operant phase, when reinforcer duration was 90 s, almost all control, but few selected mice completed a session of 20 reinforcers; however, when reinforcer duration was increased to 30 min almost all selected and control mice completed a session of 20 reinforcers. Taken together, these results suggest that selective breeding based on wheel-running rates over 24 hr may have altered the motivational system in a way that reduces the reinforcing value of shorter running durations. The implications of this finding for these mice as a model for attention deficit hyperactivity disorder (ADHD) are discussed. It also is proposed that there may be an inherent trade-off in the motivational system for activities of short versus long duration. PMID:17970415

  11. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded inland by the tsunami wave. This requires input data such as the high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with offshore numerical simulations that do not include coastal inundation. In this case, the numerical simulations are faster, but limitations are added, as the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model, formed by two models that were coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized, by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations. The combination was tackled by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave by interpolation on the database, in a short period of time, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
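
    The final interpolation-on-the-database step can be sketched as scattered-data interpolation; for brevity the sketch varies only wave height H and period T with the profile parameters held fixed, and the stored run-up values are a hypothetical surrogate, not the authors' database.

```python
# Interpolating a precomputed run-up database at a new tsunami wave (H, T).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
H = rng.uniform(0.5, 8.0, 400)            # database wave heights [m]
T = rng.uniform(300, 2400, 400)           # database wave periods [s]
runup = 1.2 * H**0.8 * (T / 600) ** 0.1   # hypothetical stored run-up [m]

query = np.array([[3.5, 900.0]])          # new wave: H = 3.5 m, T = 900 s
r = griddata(np.column_stack([H, T]), runup, query, method="linear")
print(f"interpolated run-up: {r[0]:.2f} m")
```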

  12. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    A web-based residential energy rating tool with APIs that runs on the LBNL website: provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  13. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    2016-04-22

    A web-based residential energy rating tool with APIs that runs on the LBNL website: provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  14. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
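
    The simple averaging technique can be sketched as a score-weighted mean over the ensemble; the exp(-misfit) score-to-weight mapping and the synthetic values below are assumptions for illustration, not the paper's exact weighting.

```python
# Score-weighted averaging of equivalent sea-level (ESL) contributions over
# a large ensemble, with weights derived from an aggregate misfit score.
import numpy as np

rng = np.random.default_rng(7)
misfit = rng.gamma(2.0, 1.0, 625)          # aggregate model-data misfit per run
esl = rng.normal(3.3, 1.0, 625)            # hypothetical ESL rise per run [m]

w = np.exp(-misfit)                        # assumed score-to-weight mapping
w /= w.sum()
mean_esl = np.sum(w * esl)
var_esl = np.sum(w * (esl - mean_esl) ** 2)
print(f"weighted ESL: {mean_esl:.2f} +/- {np.sqrt(var_esl):.2f} m")
```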

  15. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
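
    The process-path component can be illustrated by the simplest possible variant: a mass point on a gridded DTM that steps to its steepest downslope neighbor until it reaches a sink. This toy sketch omits the run-out, sink-filling and deposition components that the GPP model provides.

```python
# Toy steepest-descent process path on a digital terrain model (DTM).
import numpy as np

def process_path(dtm: np.ndarray, start: tuple[int, int]) -> list[tuple[int, int]]:
    path, (i, j) = [start], start
    while True:
        window = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di or dj)
                  and 0 <= i + di < dtm.shape[0] and 0 <= j + dj < dtm.shape[1]]
        nxt = min(window, key=lambda c: dtm[c])
        if dtm[nxt] >= dtm[i, j]:        # local sink: stop (GPP would fill it)
            return path
        (i, j) = nxt
        path.append(nxt)

z = np.add.outer(np.linspace(10, 0, 20), np.linspace(5, 0, 20))  # tilted plane
print(process_path(z, (0, 0))[:5])       # path heads downslope from the corner
```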

  16. An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown

    PubMed Central

    Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P.

    2015-01-01

    In real supply chain environments, a discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work on an economic production quantity (EPQ) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown. A multiple delivery policy is incorporated into the model to replace the continuous policy, and its effect on the optimal run time decision for this specific EPQ model is investigated. Next, we further expand the scope of the problem to incorporate the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs, comprising costs incurred in production units, transportation, and retail stores, are derived for both models. Numerical examples are provided to demonstrate the applicability of our research results. PMID:25821853

  17. An extended EPQ-based problem with a discontinuous delivery policy, scrap rate, and random breakdown.

    PubMed

    Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P

    2015-01-01

    In real supply chain environments, a discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work on an economic production quantity (EPQ) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown. A multiple delivery policy is incorporated into the model to replace the continuous policy, and its effect on the optimal run time decision for this specific EPQ model is investigated. Next, we further expand the scope of the problem to incorporate the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs, comprising costs incurred in production units, transportation, and retail stores, are derived for both models. Numerical examples are provided to demonstrate the applicability of our research results.
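
    The optimization step in EPQ-type models of this kind reduces to minimizing an expected total cost per unit time, E[TCU(T)], over the run time T. The sketch below numerically minimizes an illustrative cost of that shape; every cost term and parameter is a stand-in assumption, not the expression derived in these papers.

```python
# Numerical minimization of an illustrative EPQ-style expected cost per unit
# time over the production run time T.
from scipy.optimize import minimize_scalar

K, h, c_t = 450.0, 0.6, 90.0      # setup, holding, per-shipment transport costs
demand, P, scrap = 4000.0, 10000.0, 0.1
n_ship = 4                        # assumed fixed deliveries per cycle

def expected_tcu(T: float) -> float:
    lot = P * T                                       # units produced per run
    cycle = lot * (1 - scrap) / demand                # cycle length [yr]
    return (K / cycle + n_ship * c_t / cycle          # setup + transport
            + h * lot * (1 - scrap) / 2)              # average inventory holding

res = minimize_scalar(expected_tcu, bounds=(0.01, 1.0), method="bounded")
print(f"optimal run time ~ {res.x:.3f} yr, cost ~ {res.fun:.0f}")
```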

  18. Exploring storage and runoff generation processes for urban flooding through a physically based watershed model

    NASA Astrophysics Data System (ADS)

    Smith, B. K.; Smith, J. A.; Baeck, M. L.; Miller, A. J.

    2015-03-01

    A physically based model of the 14 km2 Dead Run watershed in Baltimore County, MD was created to test the impacts of detention basin storage and soil storage on the hydrologic response of a small urban watershed during flood events. The Dead Run model was created using the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) algorithms and validated using U.S. Geological Survey stream gaging observations for the Dead Run watershed and 5 subbasins over the 21 largest warm-season flood events during 2008-2012. Removal of the model detention basins resulted in a median peak discharge increase of 11% and a detention efficiency of 0.5, defined as the percent decrease in peak discharge divided by the percent of detention-controlled area. Detention efficiencies generally decreased with increasing basin size. We tested the efficiency of detention basin networks by focusing on the "drainage network order," akin to stream order but including storm drains, streams, and culverts. The detention efficiency increased dramatically between first-order detention and second-order detention but was similar for the second- and third-order detention scenarios. Removal of the soil compacted layer, a common feature in urban soils, resulted in a 7% decrease in flood peak discharges. This decrease was statistically similar to the flood peak decrease caused by existing detention. Current soil storage within the Dead Run watershed decreased flood peak discharges by a median of 60%. Numerical experiment results suggested that detention basin storage and increased soil storage have the potential to substantially decrease flood peak discharges.
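
    The detention-efficiency metric defined above fits in a one-line helper; the numbers in the example are illustrative, not the study's data.

```python
# Detention efficiency: percent decrease in peak discharge divided by the
# percent of the watershed area controlled by detention basins.
def detention_efficiency(qpeak_with: float, qpeak_without: float,
                         pct_controlled_area: float) -> float:
    pct_decrease = 100.0 * (qpeak_without - qpeak_with) / qpeak_without
    return pct_decrease / pct_controlled_area

# e.g. an 11% peak reduction from basins controlling ~20% of the area
print(detention_efficiency(qpeak_with=89.0, qpeak_without=100.0,
                           pct_controlled_area=20.0))   # ~0.55
```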

  19. A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model

    NASA Astrophysics Data System (ADS)

    Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang

    2017-05-01

    There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of the gentle submarine slope, the coastal cliff and the incident wave height are discussed. It is found that the variation of the tsunami amplification factor with incident wave height is affected by the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of the gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. The results of our work are highly accurate and helpful for inverting tsunami sources and forecasting disasters.

  20. Introducing students to ocean modeling via a web-based implementation for the Regional Ocean Modeling System (ROMS) river plume case study

    NASA Astrophysics Data System (ADS)

    Harris, C. K.; Overeem, I.; Hutton, E.; Moriarty, J.; Wiberg, P.

    2016-12-01

    Numerical models are increasingly used for both research and applied sciences, and it is important that we train students to run models and analyze model data. This is especially true within the oceanographic sciences, many of which use hydrodynamic models to address oceanographic transport problems. These models, however, often require a fair amount of training and computer skills before a student can run them and analyze the large data sets they produce. One example is the Regional Ocean Modeling System (ROMS), an open-source, three-dimensional primitive equation hydrodynamic ocean model that uses a structured curvilinear horizontal grid. It currently has thousands of users worldwide, and the full model includes modules for sediment transport and biogeochemistry, and several options for turbulence closures and numerical schemes. Implementing ROMS can be challenging to students, however, in part because the code was designed to provide flexibility in the choice of model parameterizations and processes, and to run on a variety of High Performance Computing (HPC) platforms. To provide a more accessible tool for classroom use, we have modified an existing idealized ROMS implementation to be run on an HPC via the Web Modeling Toolkit (WMT), and developed a series of lesson plans that explore sediment transport within the idealized model domain. This has addressed our goal of providing a relatively easy introduction to the numerical modeling process that can be used within upper-level undergraduate and graduate classes to explore sediment transport on continental shelves. The model implementation includes wave forcing, along-shelf currents, a riverine source, and suspended sediment transport. The model calculates suspended transport and deposition of sediment delivered to the continental shelf by a riverine flood. Lesson plans lead the students through running the model on a remote HPC and modifying the standard model. The lesson plans also include instruction for visualizing the model output within Matlab and Panoply. The lesson plans have been used in graduate and undergraduate classrooms, as well as in clinics aimed at educators. Feedback from these exercises has been used to improve the lesson plans and model implementation.

  1. Design of an ultraprecision computerized numerical control chemical mechanical polishing machine and its implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Chupeng; Zhao, Huiying; Zhu, Xueliang; Zhao, Shijie; Jiang, Chunye

    2018-01-01

    Chemical mechanical polishing (CMP) is a key process during the machining route of plane optics. To improve the polishing efficiency and accuracy, a CMP model and machine tool were developed. Based on the Preston equation and the axial run-out error measurement results of the m circles on the tin plate, a CMP model that could simulate the material removal at any point on the workpiece was presented. An analysis of the model indicated that a lower axial run-out error led to lower material removal but better polishing efficiency and accuracy. Based on this conclusion, the CMP machine was designed, and an ultraprecision gas hydrostatic guideway and rotary table as well as the Siemens 840Dsl numerical control system were incorporated in the CMP machine. To verify the design principles of the machine, a series of detection and machining experiments were conducted. The LK-G5000 laser sensor was employed for detecting the straightness error of the gas hydrostatic guideway and the axial run-out error of the gas hydrostatic rotary table. A 300-mm-diameter optic was chosen for the surface profile machining experiments performed to determine the CMP efficiency and accuracy.
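
    The Preston-equation basis of such CMP models, removal rate proportional to pressure times relative velocity (dR/dt = Kp·p·v), can be sketched as below; the run-out-to-pressure modulation and all numerical values are simple assumptions, not the paper's calibrated model.

```python
# Preston-equation sketch: material removal along one measurement circle,
# with polishing pressure modulated by an assumed axial run-out profile.
import numpy as np

Kp = 1.0e-13            # Preston coefficient [m^2/N], illustrative
p0 = 2.0e4              # nominal polishing pressure [Pa]
v = 0.8                 # pad-workpiece relative velocity [m/s]
t_total = 600.0         # polishing time [s]

theta = np.linspace(0, 2 * np.pi, 360)
runout = 1.0e-6 * np.sin(theta)              # axial run-out along a circle [m]
p = p0 * (1.0 - 0.2 * runout / 1.0e-6)       # assumed pressure modulation

removal = Kp * p * v * t_total               # removal depth vs. angle [m]
print(f"removal: {removal.mean()*1e9:.1f} nm mean, "
      f"{(removal.max()-removal.min())*1e9:.1f} nm peak-to-valley")
```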

  2. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    NASA Astrophysics Data System (ADS)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
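
    One simple way to realize this kind of subset selection is a greedy search that repeatedly adds the run whose inclusion most reduces the RMSE of the subset's climatological mean against observations. The sketch below uses synthetic fields and a greedy strategy as one plausible reading of the approach, not the authors' exact algorithm.

```python
# Greedy selection of model runs minimizing ensemble-mean RMSE vs. observations.
import numpy as np

rng = np.random.default_rng(5)
obs = rng.normal(size=500)                         # observed climatology (flattened)
runs = obs + rng.normal(0.0, 1.0, size=(38, 500))  # synthetic model runs

def rmse(subset: list[int]) -> float:
    return np.sqrt(np.mean((runs[subset].mean(axis=0) - obs) ** 2))

chosen: list[int] = []
while True:
    candidates = [i for i in range(len(runs)) if i not in chosen]
    if not candidates:
        break
    best = min(candidates, key=lambda i: rmse(chosen + [i]))
    if chosen and rmse(chosen + [best]) >= rmse(chosen):
        break                                      # no further improvement
    chosen.append(best)
print(f"selected {len(chosen)} runs, RMSE {rmse(chosen):.3f}")
```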

  3. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    PubMed

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach for better understanding the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system which was based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 to March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features associated with the prototype model. Consensus in opinion about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds were needed to validate the prototype model. Out of a total of 51 experts who were initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be running experts (66.7%), and approximately a third indicated their expertise as systems thinkers (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system-wide opportunities for the implementation of sustainable RRI prevention interventions. This 'big picture' perspective represents the first step required when thinking about the range of contributory causal factors that affect other system elements, as well as runners' behaviours in relation to RRI risk. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10⁵) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
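
    The PFR/CSTR reservoir idea can be sketched in a few lines: a reach modeled as a pure delay (plug flow) feeding a first-order mixing tank, with a first-order decay standing in for the water quality transformation. All rates and the pollutant pulse below are illustrative assumptions.

```python
# Conceptual reach: plug flow reactor (pure delay) -> continuously stirred
# tank reactor (first-order mixing) with first-order pollutant decay.
import numpy as np

dt, n = 0.1, 2000                  # time step [h], number of steps
delay = int(5.0 / dt)              # PFR residence time: 5 h
k_mix, k_decay = 1.0 / 8.0, 0.05   # CSTR mixing rate [1/h], decay rate [1/h]

inflow = np.zeros(n)
inflow[:100] = 10.0                # 10 h pollutant pulse [mg/l]
c = np.zeros(n)                    # CSTR concentration
for i in range(1, n):
    upstream = inflow[i - delay] if i >= delay else 0.0   # PFR advection
    c[i] = c[i - 1] + dt * (k_mix * (upstream - c[i - 1]) - k_decay * c[i - 1])
print(f"peak at t = {c.argmax() * dt:.1f} h, c = {c.max():.2f} mg/l")
```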

  5. Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration

    NASA Astrophysics Data System (ADS)

    Wells, B.; Toniolo, H. A.; Stuefer, S. L.

    2015-12-01

    Understanding hydrologic changes is key for preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet as well as a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers event-based modeling package HEC-HMS. With the use of historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during the calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and can help guide communities and officials toward better-educated decisions regarding the changing hydrology in the area and the tied economic drivers.

  6. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open-source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform: the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced, by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.

  7. Removal of power line interference of space bearing vibration signal based on the morphological filter and blind source separation

    NASA Astrophysics Data System (ADS)

    Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping

    2017-06-01

    It is difficult to extract feature information from the vibration signal of a space bearing because of several types of noise: the running-trend component, high-frequency noise and, especially, strong power line interference (50 Hz) and its octave components, which are present in ground-based equipment that simulates the running space environment. This article proposes a combined method to eliminate them. Firstly, empirical mode decomposition (EMD) is used to remove the running-trend component of the signal, eliminating the trend that affects signal processing accuracy. Then a morphological filter is used to eliminate the high-frequency noise. Finally, the components and characteristics of the power line interference are studied and, based on these characteristics, a revised blind source separation model is used to remove the interference. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate this noise.
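
    The morphological-filtering step can be illustrated with a common opening-closing average using a flat structuring element; the signal, noise level, and structuring-element length below are assumptions, not the paper's configuration.

```python
# Morphological filtering of a simulated bearing vibration signal: the
# average of grey opening and closing suppresses impulsive high-frequency
# noise while roughly preserving the signal envelope.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

fs = 2000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 35 * t)                   # bearing component
noisy = signal + 0.3 * np.random.standard_normal(t.size)

size = 9                                              # structuring element length
filtered = 0.5 * (grey_opening(noisy, size=size) + grey_closing(noisy, size=size))
# The residual error typically drops relative to the raw noisy signal:
print(np.std(filtered - signal), np.std(noisy - signal))
```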

  8. High resolution modelling of wind fields for optimization of empirical storm flood predictions

    NASA Astrophysics Data System (ADS)

    Brecht, B.; Frank, H.

    2014-05-01

    High-resolution wind fields are necessary to predict the occurrence of storm flood events and their magnitude. Deutscher Wetterdienst (DWD) created a catalogue of detailed wind fields for 39 historical storms at the German North Sea coast from the years 1962 to 2011. The catalogue is used by the Niedersächsisches Landesamt für Wasser-, Küsten- und Naturschutz (NLWKN) coastal research center to improve their flood alert service. The computation of wind fields and other meteorological parameters is based on the DWD model chain, going from the global model GME via the limited-area model COSMO with 7 km mesh size down to a COSMO model with 2.2 km mesh size. To obtain an improved analysis, the COSMO runs are nudged toward observations for the historical storms. The global model GME is initialised from the ERA reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF). As expected, for most storms the nudged runs agreed better with observations than the normal forecast runs. We also found during the verification process that different land use data sets could influence the results considerably.

  9. Running with horizontal pulling forces: the benefits of towing.

    PubMed

    Grabowski, Alena M; Kram, Rodger

    2008-10-01

    Towing, or running with a horizontal pulling force, is a common technique used by adventure racing teams. During an adventure race, the slowest person on a team determines the team's overall performance. To improve overall performance, a faster runner tows a slower runner with an elastic cord attached to their waists. Our purpose was to create and validate a model that predicts the optimal towing force needed by two runners to achieve their best overall performance. We modeled the effects of towing forces between two runners that differ in solo 10-km performance time and/or body mass. We calculated the overall time that could be saved with towing for running distances of 10, 20, and 42.2-km based on equations from previous research. Then, we empirically tested our 10-km model on 15 runners. Towing improved overall running performance considerably and our model accurately predicted this performance improvement. For example, if two runners (a 70 kg runner with a 35 min solo 10-km time and a 70-kg runner with a 50-min solo 10-km time) maintain an optimal towing force throughout a 10-km race, they can improve overall performance by 15%, saving almost 8 min. Ultimately, the race performance time and body mass of each runner determine the optimal towing force.

  10. Structure-seeking multilinear methods for the analysis of fMRI data.

    PubMed

    Andersen, Anders H; Rayens, William S

    2004-06-01

    In comprehensive fMRI studies of brain function, the data structures often contain higher-order ways such as trial, task condition, subject, and group in addition to the intrinsic dimensions of time and space. While multivariate bilinear methods such as principal component analysis (PCA) have been used successfully for extracting information about spatial and temporal features in data from a single fMRI run, the need to unfold higher-order data sets into bilinear arrays has led to decompositions that are nonunique and to the loss of multiway linkages and interactions present in the data. These additional dimensions or ways can be retained in multilinear models to produce structures that are unique and which admit interpretations that are neurophysiologically meaningful. Multiway analysis of fMRI data from multiple runs of a bilateral finger-tapping paradigm was performed using the parallel factor (PARAFAC) model. A trilinear model was fitted to a data cube of dimensions voxels by time by run. Similarly, a quadrilinear model was fitted to a higher-way structure of dimensions voxels by time by trial by run. The spatial and temporal response components were extracted and validated by comparison to results from traditional SVD/PCA analyses based on scenarios of unfolding into lower-order bilinear structures.
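
    A trilinear PARAFAC fit to a voxels-by-time-by-run data cube can be sketched with the tensorly library; the tensor here is simulated noise standing in for fMRI data, and the rank is an arbitrary choice for illustration.

```python
# Rank-3 CP (PARAFAC) decomposition of a simulated voxels x time x run cube:
# data ~= sum_r  spatial_r (outer) temporal_r (outer) run_r.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
data = tl.tensor(rng.standard_normal((500, 120, 4)))   # voxels x time x run

weights, (spatial, temporal, run_loadings) = parafac(data, rank=3, n_iter_max=200)

print(spatial.shape, temporal.shape, run_loadings.shape)  # (500,3) (120,3) (4,3)
```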

  11. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the next decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing and grid computing. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can access Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.
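
    A toy of the dispatch-and-collect pattern in point (2), with a local process pool standing in for the project's MapReduce-style grid engine (the function and configuration format here are hypothetical):

```python
from multiprocessing import Pool

def run_model(cfg):
    # Stand-in for one climate model run; returns (run_id, result).
    run_id, param = cfg
    return run_id, param ** 2

if __name__ == "__main__":
    configs = [(i, 0.1 * i) for i in range(8)]     # model run configurations
    with Pool(processes=4) as pool:
        results = pool.map(run_model, configs)     # "map": dispatch the runs
    total = sum(r for _, r in results)             # "reduce": collect results
    print(results, total)
```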

  12. To run or not to run: a post-meniscectomy qualitative risk analysis model for osteoarthritis when considering a return to recreational running.

    PubMed

    Baumgarten, Bob

    2007-01-01

    The increased likelihood of osteoarthritic change in the tibiofemoral joint following meniscectomy is well documented. This awareness often leads medical practitioners to advise patients previously engaged in recreational running who have undergone meniscectomy to cease all recreational running. This literature review examines the following questions: 1) Is there evidence to demonstrate that runners, post-meniscectomy, incur a great enough risk of early degenerative OA to cease all running? 2) Does the literature yield risk factors for early OA that would guide a physical therapist in advising the post-meniscectomy patient contemplating a return to recreational running? Current literature related to meniscal structure and function, the etiology and definition of osteoarthritis, methods for assessing osteoarthritis, the relationship between running and osteoarthritis, and the relationship between meniscectomy and osteoarthritis is reviewed. This review finds that while the probability of early osteoarthritis in the post-meniscectomy population is substantial, it is a probability and not a certainty. To help guide a physical therapist in advising the patient on a safe return to running following a meniscectomy, a qualitative risk assessment based on identified risk factors for osteoarthritis in both the running and the post-meniscectomy populations is proposed.

  13. Upscaling of Hydraulic Conductivity using the Double Constraint Method

    NASA Astrophysics Data System (ADS)

    El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke

    2013-04-01

    The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined in the field using slug and packer tests are measured on the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity into a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify the coarse-scale conductivities and decrease the number of grid blocks. Its main advantages are robustness and easy implementation, making it possible to base the computations on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. An exemplification is presented for the Kleine Nete catchment, Belgium. As a result, we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run costs less computation time and requires less memory. In addition, the ranking of models was investigated.
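
    A minimal one-dimensional illustration of the double constraint idea (my own sketch, not the authors' code): the flux-constrained run fixes the Darcy flux through each coarse block, the head-constrained run fixes the head drop across it, and the coarse conductivity is the value that honors both, which for 1-D series flow is the harmonic mean.

```python
import numpy as np

# Fine grid: heterogeneous conductivity, steady 1-D flow with known flux q.
n, dx, q = 100, 1.0, 0.5
rng = np.random.default_rng(1)
K_fine = np.exp(rng.normal(0.0, 1.0, n))       # lognormal conductivities

# "Run 1" (flux constraint): with q fixed, Darcy's law gives each cell's
# head drop; the cumulative sum plays the role of the measured heads that
# constrain "run 2".
dh = -q * dx / K_fine
h = 10.0 + np.concatenate(([0.0], np.cumsum(dh)))

# DCM upscaling: the coarse-block K must reproduce both the flux (run 1)
# and the observed head drop across the block (run 2).
block = 10
for b in range(n // block):
    drop = h[b * block] - h[(b + 1) * block]
    K_coarse = q * block * dx / drop           # the harmonic mean in 1-D
    print(f"block {b}: K_coarse = {K_coarse:.3f}")
```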

  14. Communications network design and costing model users manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.

  15. Examining the impact of larval source management and insecticide-treated nets using a spatial agent-based model of Anopheles gambiae and a landscape generator tool

    PubMed Central

    2013-01-01

    Background Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes and the number of replicated simulation runs reported in results. The choice of ITN coverage definition may also affect the predictive findings. Hence, independent verification of the prior findings of published models, by replication, bears special importance. Methods A spatially-explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate the results of an earlier LSM study, the original landscapes and scenarios are replicated using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. Results General agreement with the earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and excessively high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITN mortality can play a more important role with higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the density of houses increases. Conclusions Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) for future modellers are also recommended. PMID:23965136
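
    The boundary effect discussed above can be illustrated with a deliberately crude random-walk sketch (hypothetical numbers, not the study's ABM): the same walk run on an absorbing landscape, where agents leaving the grid die, versus a non-absorbing (wrapping) one.

```python
import numpy as np

def walk(n_agents=2000, steps=200, size=50, absorbing=True, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, size, (n_agents, 2))
    alive = np.ones(n_agents, dtype=bool)
    for _ in range(steps):
        # Each surviving agent takes one random step in x and y.
        pos[alive] += rng.integers(-1, 2, (alive.sum(), 2))
        if absorbing:
            # Agents that step off the landscape are removed for good.
            alive &= np.all((pos >= 0) & (pos < size), axis=1)
        else:
            pos %= size                        # wrap around: nobody is lost
    return alive.sum()

print("absorbing:", walk(absorbing=True),
      "non-absorbing:", walk(absorbing=False))
```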

  16. Acute and Chronic Exercise in Animal Models.

    PubMed

    Thu, Vu Thi; Kim, Hyoung Kyu; Han, Jin

    2017-01-01

    Numerous cardiac exercise models using animal subjects have been established to uncover the cardiovascular physiological mechanisms of exercise and to determine the effects of exercise on cardiovascular health and disease. In most cases, animal-based cardiovascular exercise modalities include treadmill running, swimming, and voluntary wheel running at a range of intensities, times, and durations. The animals used include small rodents (e.g., mice and rats) and large animals (e.g., rabbits, dogs, goats, sheep, pigs, and horses). Depending on the research goal, each experimental protocol should also describe whether its respective exercise treatment can produce the anticipated acute or chronic cardiovascular adaptive response. In this chapter, we briefly describe the most common animal models of acute and chronic cardiovascular exercise currently in use and likely to be chosen in the near future. Strengths and weaknesses of animal-based cardiac exercise modalities are also discussed.

  17. Integrated photooxidative extractive deep desulfurization using metal doped TiO2 and eutectic based ionic liquid

    NASA Astrophysics Data System (ADS)

    Zaid, Hayyiratul Fatimah Mohd; Kait, Chong Fai; Mutalib, Mohamed Ibrahim Abdul

    2016-11-01

    A series of metal-doped TiO2 photocatalysts, namely Fe/TiO2, Cu/TiO2 and Cu-Fe/TiO2, was synthesized and characterized for use in integrated photooxidative extractive deep desulfurization of model oil (dodecane) and diesel fuel. The order of photocatalytic activity was Cu-Fe/TiO2, followed by Cu/TiO2 and then Fe/TiO2. Cu-Fe/TiO2 was an effective photocatalyst for sulfur conversion at ambient atmospheric pressure. Hydrogen peroxide was used as the source of oxidant and a eutectic-based ionic liquid as the extractant. Sulfur conversion in the model oil reached 100%. Sulfur was removed from the model oil by two successive extractions, removing 97.06% in the first run and 2.94% in the second.

  18. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    NASA Technical Reports Server (NTRS)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
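
    Not Trick's actual API, but a miniature of the division of labor it automates: the user supplies model functions (a derivative routine and a scheduled logging job), while the executive owns the integration and scheduling loop.

```python
import numpy as np

def derivatives(state):
    # User model: a falling ball; state is (position, velocity).
    x, v = state
    return np.array([v, -9.81])

def log_job(t, state):
    # User model: a scheduled data-recording job (every 10th step here).
    print(f"t={t:4.2f}s  x={state[0]:7.2f} m")

# "Executive": fixed-step RK2 integration plus simple job scheduling.
state, t, dt = np.array([100.0, 0.0]), 0.0, 0.01
while t < 2.0:
    k1 = derivatives(state)
    k2 = derivatives(state + dt * k1)
    state, t = state + 0.5 * dt * (k1 + k2), t + dt
    if round(t / dt) % 10 == 0:
        log_job(t, state)
```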

  19. Completing and Adapting Models of Biological Processes

    NASA Technical Reports Server (NTRS)

    Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard

    2006-01-01

    We present a learning-based method for model completion and adaptation, based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements via provably equivalent models to running code, and 2) automata-learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts in the field, such as biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application will concern autonomic systems for space exploration.

  20. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.

  1. An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation

    DOE PAGES

    Nutaro, James

    2014-11-03

    In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events, including differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.

  2. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  3. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  4. Specialist integrated haematological malignancy diagnostic services: an Activity Based Cost (ABC) analysis of a networked laboratory service model.

    PubMed

    Dalley, C; Basarir, H; Wright, J G; Fernando, M; Pearson, D; Ward, S E; Thokula, P; Krishnankutty, A; Wilson, G; Dalton, A; Talley, P; Barnett, D; Hughes, D; Porter, N R; Reilly, J T; Snowden, J A

    2015-04-01

    Specialist Integrated Haematological Malignancy Diagnostic Services (SIHMDS) were introduced as a standard of care within the UK National Health Service to reduce diagnostic error and improve clinical outcomes. Two broad models of service delivery have become established: 'co-located' services operating from a single site, and 'networked' services, with geographically separated laboratories linked by common management and information systems. A detailed systematic cost analysis has never been published for any established SIHMDS model. We used Activity Based Costing (ABC) to construct a cost model for our regional 'networked' SIHMDS covering a two-million population, based on activity in 2011. Overall estimated annual running costs were £1,056,260 per annum (£733,400 excluding consultant costs), with individual running costs for the diagnosis, staging, disease monitoring and end-of-treatment assessment components of £723,138, £55,302, £184,152 and £94,134 per annum, respectively. The cost distribution by department was 28.5% for haematology, 29.5% for histopathology and 42% for genetics laboratories. The costs of the diagnostic pathways varied considerably; the pathways for myelodysplastic syndromes and lymphoma were the most expensive, and those for essential thrombocythaemia and polycythaemia vera the least. ABC analysis enables estimation of the running costs of a SIHMDS model comprised of 'networked' laboratories. Similar cost analyses for other SIHMDS models covering varying populations are warranted to optimise quality and cost-effectiveness in the delivery of modern haemato-oncology diagnostic services in the UK as well as internationally.

  5. Appraisal of jump distributions in ensemble-based sampling algorithms

    NASA Astrophysics Data System (ADS)

    Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo

    2017-04-01

    Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler for a given application, and a poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations, or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient; we therefore resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move with respect to all three aspects.
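
    For readers unfamiliar with the stretch move, a minimal example using the emcee package (the target density here is an arbitrary correlated Gaussian, not one of the study's posteriors):

```python
import numpy as np
import emcee

def log_prob(x):
    # Strongly correlated 2-D Gaussian as a stand-in target density.
    icov = np.array([[5.0, -4.5], [-4.5, 5.0]])
    return -0.5 * x @ icov @ x

ndim, nwalkers = 2, 32
p0 = np.random.default_rng(0).normal(size=(nwalkers, ndim))

# Ensemble sampler with the default stretch move; walkers evolve in parallel.
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)   # drop burn-in
print(samples.mean(axis=0), samples.shape)
```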

  6. Nesting behavior of house mice (Mus domesticus) selected for increased wheel-running activity.

    PubMed

    Carter, P A; Swallow, J G; Davis, S J; Garland, T

    2000-03-01

    Nest building was measured in "active" (housed with access to running wheels) and "sedentary" (without wheel access) mice (Mus domesticus) from four replicate lines selected for 10 generations for high voluntary wheel-running behavior, and from four randombred control lines. Based on previous studies of mice bidirectionally selected for thermoregulatory nest building, it was hypothesized that nest building would show a negative correlated response to selection on wheel running. Such a response could constrain the evolution of high voluntary activity, because nesting has also been shown to be positively genetically correlated with the successful production of weaned pups. With wheel access, selected mice of both sexes built significantly smaller nests than did control mice. Without wheel access, selected females also built significantly smaller nests than did control females, but only when body mass was excluded from the statistical model, suggesting that body mass mediated this correlated response to selection. Total distance run and mean running speed on wheels were significantly higher in selected mice than in controls, but no differences in the amount of time spent running were measured, indicating a complex cause of the response of nesting to selection for voluntary wheel running.

  7. Software Simplifies the Sharing of Numerical Models

    NASA Technical Reports Server (NTRS)

    2014-01-01

    To ease the sharing of climate models with university students, Goddard Space Flight Center awarded SBIR funding to Reston, Virginia-based Parabon Computation Inc., a company that specializes in cloud computing. The firm developed a software program capable of running climate models over the Internet, and also created an online environment for people to collaborate on developing such models.

  8. Tracking the critical offshore conditions leading to marine inundation via active learning of full-process based models

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François

    2016-04-01

    From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation of key assets for the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach to risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining early warning networks based on hydro-meteorological forecasts or observations. However, full-process based models for coastal flooding simulation have a very large computational cost (typically several hours per run), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full-process based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in the case of a coastal defense, the dynamics are characterized first by a linear behavior of the waterline position (increasing with increasingly energetic offshore conditions) as long as there is no overtopping, and then by a very strong increase as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow. Such behavior might make the training phase of the meta-model very tedious. In the present study, we explore the feasibility of active learning techniques, also known as semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the aforementioned difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are done on a cross-shore case, using the process-based SWASH model. The related computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach with respect to a reference set of 400 long-running simulations in the domain of (S ; Hs). Our tests showed that the tracking of the critical contour can be achieved with a reasonable number of long-running simulations, of the order of a few tens.
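
    A conceptual sketch of the active learning loop (not the authors' code; the threshold function stands in for a 10-hour SWASH run): train an SVM on a few expensive "inundation / no inundation" runs, then always simulate next the candidate whose decision function is closest to zero, i.e. closest to the estimated critical contour.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def expensive_run(x):
    # Stand-in for a long-running simulation: hypothetical inundation rule.
    S, Hs = x
    return int(S + 0.8 * Hs > 2.0)

# Initial design in (surge S, wave height Hs); two seed points guarantee
# both classes are present before the first SVM fit.
X = np.vstack([[0.1, 0.1], [1.9, 1.9], rng.uniform(0, 2, (8, 2))])
y = np.array([expensive_run(x) for x in X])

candidates = rng.uniform(0, 2, (500, 2))
for _ in range(20):                           # active-learning iterations
    clf = SVC(kernel="rbf").fit(X, y)
    scores = np.abs(clf.decision_function(candidates))
    nxt = candidates[np.argmin(scores)]       # most ambiguous candidate
    X = np.vstack([X, nxt])
    y = np.append(y, expensive_run(nxt))
print(f"{len(X)} runs used to refine the critical contour")
```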

  9. Numerical run-out modelling used for reassessment of existing permanent avalanche paths in the Krkonose Mts., Czechia

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Klimes, Jan; Balek, Jan; Taborik, Petr; Juras, Roman; Pavlasek, Jiri

    2015-04-01

    Run-out modelling of snow avalanches is widely applied in high mountain areas worldwide. This study presents the application of snow avalanche run-out calculations to mid-mountain ranges - the Krkonose, Jeseniky and Kralicky Sneznik Mountains. All of the mentioned mountain ranges lie in the northern part of Czechia, close to the border with Poland. The highest peak reaches only 1602 m a.s.l. However, climatic conditions and regular snowpack presence are the reason why these mountain ranges experience considerable snow avalanche activity every year, sometimes resulting in injuries or even fatalities. As part of an applied project dealing with snow avalanche hazard prediction, a re-assessment of permanent snow avalanche paths has been performed based on extensive statistics covering the period from 1961/62 to the present. On each avalanche path, avalanches with different return periods were modelled using the RAMMS code. As a result, an up-to-date snow avalanche hazard map was prepared.

  10. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    USGS Publications Warehouse

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
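
    A toy of the queue/dispatch pattern over TCP/IP (this is not GENIE itself; host, port, and payload format are invented): a manager hands out model-run IDs one per connection, and any connected worker executes them.

```python
import socket
import threading
import time

def manager(host="127.0.0.1", port=5050, n_runs=8):
    queue = list(range(n_runs))          # pending model-run IDs
    with socket.create_server((host, port)) as srv:
        while queue:
            conn, _ = srv.accept()
            with conn:
                # Dispatch one run ID per worker connection.
                conn.sendall(str(queue.pop(0)).encode())

def worker(host="127.0.0.1", port=5050):
    with socket.create_connection((host, port)) as conn:
        run_id = int(conn.recv(64).decode())
    print(f"worker executing model run {run_id}")   # stand-in for a model run

threading.Thread(target=manager, daemon=True).start()
time.sleep(0.2)                          # let the manager start listening
for _ in range(8):
    worker()
```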

  11. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by a few fundamental parameters, such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model for estimating the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity; in particular, higher cadence implies a lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
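
    A back-of-the-envelope version of the trend the model predicts (my sketch, with an assumed grounded-phase ratio): treating the runner as a ballistic "bouncing ball", a higher cadence shortens the aerial phase and therefore softens each landing.

```python
import numpy as np

g = 9.81
cadence = np.array([150, 160, 170, 180, 190]) / 60.0  # steps per second
duty = 0.35                                           # assumed grounded ratio
t_air = (1.0 - duty) / cadence                        # aerial time per step (s)
v_impact = g * t_air / 2.0                            # landing speed (m/s)
for c, v in zip(cadence * 60, v_impact):
    print(f"{c:3.0f} steps/min -> impact speed {v:.2f} m/s")
```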

  12. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON, but it uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
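
    The integration-loop replacement described above, in miniature: NEURON's Python API lets external code drive the time stepping itself rather than calling the built-in run routine (this assumes the neuron package is installed; the bare single-compartment cell is purely illustrative).

```python
from neuron import h

soma = h.Section(name="soma")
soma.insert("hh")                 # Hodgkin-Huxley membrane mechanism

h.finitialize(-65)                # initialize all states at -65 mV
while h.t < 10.0:                 # custom main loop: 10 ms of model time
    h.fadvance()                  # advance one integration step
    # a framework could exchange spikes or run monitor/analysis
    # components here, between steps
print(f"t = {h.t:.2f} ms, v = {soma(0.5).v:.2f} mV")
```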

  13. A rapid estimation of near field tsunami run-up

    USGS Publications Warehouse

    Riqueime, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jamie

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  14. NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.

    PubMed

    Pardoe, Heath R; Kuzniecky, Ruben

    2018-01-01

    The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.

  15. Assimilation of GOES satellite-based convective initiation and cloud growth observations into the Rapid Refresh and HRRR systems to improve aviation forecast guidance

    NASA Astrophysics Data System (ADS)

    Mecikalski, John; Smith, Tracy; Weygandt, Stephen

    2014-05-01

    Latent heating profiles derived from GOES satellite-based cloud-top cooling rates are being assimilated into a retrospective version of the Rapid Refresh system (RAP) run at the Global Systems Division. Assimilation of these data may help reduce the time lag for convection initiation (CI) in both the RAP model forecasts and in 3-km High Resolution Rapid Refresh (HRRR) model runs that are initialized from the RAP model grids. These data may also improve both the location and organization of developing convective storm clusters, especially in the nested HRRR runs. These types of improvements are critical for providing better convective storm guidance around busy hub airports and aviation corridor routes, especially in the highly congested Ohio Valley - Northeast - Mid-Atlantic region. Additional work is focusing on assimilating GOES-R CI algorithm cloud-top cooling-based latent heating profiles directly into the HRRR model. Because of the small-scale nature of the convective phenomena depicted in the cloud-top cooling rate data (on the order of the 1-4 km scale), direct assimilation of these data in the HRRR may be more effective than assimilation in the RAP. The RAP is an hourly assimilation system developed at NOAA/ESRL and implemented at NCEP as a NOAA operational model in May 2012. The 3-km HRRR runs hourly out to 15 hours as a nest within the ESRL real-time experimental RAP. The RAP and HRRR both use the WRF ARW model core, and the Gridpoint Statistical Interpolation (GSI) is used within an hourly cycle to assimilate a wide variety of observations (including radar data) to initialize the RAP. Within this modeling framework, the cloud-top cooling rate-based latent heating profiles are applied as prescribed heating during the diabatic forward model integration part of the RAP digital filter initialization (DFI). No digital filtering is applied on the 3-km HRRR grid, but similar forward model integration with prescribed heating is used to assimilate information from radar reflectivity, lightning flash density and the satellite-based cloud-top cooling rate data. In the current HRRR configuration, four 15-min cycles of latent heating are applied during a pre-forecast hour of integration. This is followed by a final application of GSI at 3 km to fit the latest conventional observation data. At the conference, results from a 5-day retrospective period (July 5-10, 2012) will be shown, focusing on assessment of the data impact for both the RAP and HRRR, as well as the sensitivity to various assimilation parameters, including assumed heating strength. Emphasis will be given to documenting the forecast impacts for aviation applications in the Eastern U.S.

  16. Plume Tracker: A New Toolkit for the Mapping of Volcanic Plumes with Multispectral Thermal Infrared Remote Sensing

    NASA Astrophysics Data System (ADS)

    Realmuto, V. J.; Baxter, S.; Webley, P. W.

    2011-12-01

    Plume Tracker is the next generation of interactive plume mapping tools pioneered by MAP_SO2. First developed in 1995, MAP_SO2 has been used to study plumes at a number of volcanoes worldwide with data acquired by both airborne and space-borne instruments. The foundation of these tools is a radiative transfer (RT) model, based on MODTRAN, which we use as the forward model for our estimation of ground temperature and sulfur dioxide concentration. Plume Tracker retains the main functions of MAP_SO2, providing interactive tools to input radiance measurements and ancillary data, such as profiles of atmospheric temperature and humidity, to the retrieval procedure, generating the retrievals, and visualizing the resulting retrievals. Plume Tracker improves upon MAP_SO2 in the following areas: (1) an RT model based on an updated version of MODTRAN, (2) a retrieval procedure based on maximizing the vector projection of model spectra onto observed spectra, rather than minimizing the least-squares misfit between the model and observed spectra, (3) an ability to input ozone profiles to the RT model, (4) increased control over the vertical distribution of the atmospheric gas species used in the model, (5) a standard programmatic interface to the RT model code, based on the Component Object Model (COM) interface, which will provide access to any programming language that conforms to the COM standard, and (6) a new binning algorithm that decreases running time by exploiting spatial redundancy in the radiance data. Based on our initial testing, the binning algorithm can reduce running time by an order of magnitude. The Plume Tracker project is a collaborative effort between the Jet Propulsion Laboratory and Geophysical Institute (GI) of the University of Alaska-Fairbanks. Plume Tracker is integrated into the GI's operational plume dispersion modeling system and will ingest temperature and humidity profiles generated by the Weather Research and Forecasting model, together with plume height estimates from the Puff model. The access to timely forecasts of atmospheric conditions, together with the reductions in running time, will increase the utility of Plume Tracker in the Alaska Volcano Observatory's mission to mitigate volcanic hazards in Alaska and the Northern Pacific region.

  17. Assimilation of Cloud Information in Numerical Weather Prediction Model in Southwest China

    NASA Astrophysics Data System (ADS)

    HENG, Z.

    2016-12-01

    Based on the ARPS Data Analysis System (ADAS) and the Weather Research and Forecasting (WRF) model, simulation experiments from July 1st 2015 to August 1st 2015 are conducted in the region of Southwest China. In the assimilation experiment (EXP), surface observations are assimilated, and cloud information retrieved from weather Doppler radar and the Fengyun-2E (FY-2E) geostationary satellite is used, via the complex cloud analysis scheme in ADAS, to insert microphysical variables and adjust the humidity structure in the initial condition. In the control run (CTL), surface observations are assimilated, but no cloud information is used in ADAS. The simulation of a rainstorm caused by the Southwest Vortex during 14-15 July 2015 shows that the EXP run better represents the shape and intensity of precipitation, especially the center of the rainstorm. The one-month inter-comparison of the initial and prediction results between the EXP and CTL runs revealed that the EXP runs produce more realistic rainfall and score higher in precipitation prediction. Keywords: NWP, rainstorm, Data assimilation

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markidis, S.; Rizwan, U.

    The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators could experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that 3D models exhibit static textures and that they are not fully interactive, and therefore they cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture with the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded by a laptop or tablet PC added to the virtual environment. (authors)

  19. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

    A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  20. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  1. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
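
    A compact sketch of the state-transition idea from these three records (the region centroids and transition counts below are invented): cluster past executions into resource regions, estimate transition probabilities between regions, and predict the next run's CPU time as the expectation over the next region.

```python
import numpy as np

# Hypothetical centroids from an off-line cluster analysis:
# mean CPU time (s) of each resource region.
region_cpu = np.array([0.5, 3.0, 12.0])

# Transition counts between regions over the program's past executions.
counts = np.array([[8, 2, 0],
                   [3, 5, 2],
                   [0, 2, 4]], dtype=float)
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transitions

last_region = 1                                  # region of the latest run
prediction = P[last_region] @ region_cpu         # expected CPU time next run
print(f"predicted CPU time: {prediction:.2f} s")
```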

  2. Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
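
    The POD step can be compressed into a few lines (an assumed synthetic setup, not the authors' compositional simulator): stack snapshot states from training runs, take an SVD, and keep the leading left singular vectors as the reduced basis onto which each new state is projected.

```python
import numpy as np

n_cells, n_snapshots, rank = 2000, 60, 8
rng = np.random.default_rng(3)
# Synthetic training snapshots with hidden low-dimensional structure.
modes = rng.standard_normal((n_cells, rank))
snapshots = modes @ rng.standard_normal((rank, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :rank]                   # POD basis (n_cells x rank)

state = snapshots[:, 0]
reduced = basis.T @ state             # rank unknowns instead of n_cells
# Reconstruction error is ~0 because the state lies in the basis span.
print(np.linalg.norm(state - basis @ reduced))
```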

  3. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert

    Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg-Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed from polynomials up to fourth order. It accurately reproduces the power output of the numerical simulations for low permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions and smooth approximations of Heaviside functions, and it accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is considerable deviation from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found ROM-3 to be a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.

  4. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...

    2017-07-10

    Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg-Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed from polynomials up to fourth order. It accurately reproduces the power output of the numerical simulations for low permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions and smooth approximations of Heaviside functions, and it accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is considerable deviation from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found ROM-3 to be a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
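
    A sketch of the fitting step described above, with a hypothetical fourth-order polynomial (the ROM-1 form) fitted to synthetic "simulation" output by Levenberg-Marquardt via SciPy; the data and coefficients are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 30.0, 120)            # time (years)
# Synthetic stand-in for simulated thermal power output (MW) with noise.
power = 30.0 - 0.02 * t**2 \
        + 0.1 * np.random.default_rng(4).normal(size=t.size)

def residuals(c):
    # Fourth-order polynomial ROM; residual = ROM - simulation data.
    rom = c[0] + c[1] * t + c[2] * t**2 + c[3] * t**3 + c[4] * t**4
    return rom - power

fit = least_squares(residuals, x0=np.zeros(5), method="lm")
print(fit.x)                               # fitted ROM coefficients
```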

  5. Dealing with Resistance to Reform

    NASA Astrophysics Data System (ADS)

    Reif, Marc

    2008-09-01

    Any teacher, new or experienced, will at times face criticism, resistance, and even hostility from students, parents, and colleagues. An inexperienced teacher who runs a reformed classroom based on classroom discourse and "interactive engagement," both of which may run counter to school culture, risks resistance or even outright revolt. In this essay, which grew out of discussions on the Modeling Instruction Program Listserv, Marc Reif shares some strategies that can be used to forestall or deal with classroom opposition.

  6. B physics and Quarkonia in CMS

    NASA Astrophysics Data System (ADS)

    Fiorendi, Sara

    2017-12-01

    The heavy-flavor sector offers the opportunity to perform indirect tests of beyond the Standard Model physics through precision measurements and of quantum chromodynamics (QCD) through particle production studies. A review of recent measurements on heavy flavors from the CMS experiment is presented. Results are based on data collected during LHC Run I and Run II and include measurements of heavy flavor production and properties, rare decays, CP violation, exotic and standard quarkonia.

  7. Modeling Large-Scale Networks Using Virtual Machines and Physical Appliances

    DTIC Science & Technology

    2014-01-27

    downloaded and run locally. The lab solution couldn't be based on ActiveX because the military disallowed ActiveX support on its systems, which made running an RDP client over ActiveX not possible. The challenges the SEI encountered in delivering the instruction were

  8. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take up to days, weeks, and even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or additional data modifications, only compound computation times further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation points, without having to filter data into a synthesized grid.
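
    The closed-form attraction of a vertical line element is what makes this approach fast; the short sketch below evaluates the standard textbook expression for the vertical gravity of a vertical line mass at a point observation. The formula is the generic one for this geometry and is not taken from SIGMA's implementation; the cell size and density are illustrative.

    import numpy as np

    G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

    def gz_line_element(r, z1, z2, lam):
        # Vertical attraction at horizontal distance r from a vertical line
        # mass spanning depths z1 to z2 (z positive down), with linear mass
        # density lam [kg/m]:
        #   g_z = G*lam*(1/sqrt(r^2+z1^2) - 1/sqrt(r^2+z2^2))
        return G * lam * (1.0 / np.hypot(r, z1) - 1.0 / np.hypot(r, z2))

    # Example: a 1 km tall column of 2670 kg/m^3 rock over a 100 m x 100 m
    # cell, observed from 500 m away horizontally.
    lam = 2670.0 * 100.0 * 100.0  # mass per metre of column [kg/m]
    print(gz_line_element(500.0, 0.0, 1000.0, lam), "m/s^2")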

  9. Distributed Assimilation of Satellite-based Snow Extent for Improving Simulated Streamflow in Mountainous, Dense Forests: An Example Over the DMIP2 Western Basins

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Peters-Lidard, Christa D.; Koren, Victor; Cosgrove, Brian A.; DeGoncalves, Luis G. D.; Smith, Michael; Geiger, James; Cui, Zhengtao; Borak, Jordan; Kumar, Sujay V.; hide

    2012-01-01

    Snow cover area affects snowmelt, soil moisture, evapotranspiration, and ultimately streamflow. For the Distributed Model Intercomparison Project - Phase 2 Western basins, we assimilate satellite-based fractional snow cover area (fSCA) from the Moderate Resolution Imaging Spectroradiometer, or MODIS, into the National Weather Service (NWS) SNOW-17 model. This model is coupled with the NWS Sacramento Heat Transfer (SAC-HT) model inside the National Aeronautics and Space Administration's (NASA) Land Information System. SNOW-17 computes fSCA from snow water equivalent (SWE) values using an areal depletion curve. Using a direct insertion, we assimilate fSCAs in two fully distributed ways: 1) we update the curve by attempting SWE preservation, and 2) we reconstruct SWEs using the curve. The preceding are refinements of an existing simple, conceptually-guided NWS algorithm. Satellite fSCA over dense forests inadequately accounts for below-canopy snow, degrading simulated streamflow upon assimilation during snowmelt. Accordingly, we implement a below-canopy allowance during assimilation. This simplistic allowance and direct insertion are found to be inadequate for improving calibrated results, still degrading them as mentioned above. However, for streamflow volume for the uncalibrated runs, we obtain: (1) substantial to major improvements (64-81 %) as a percentage of the control run residuals (or distance from observations), and (2) minor improvements (16-22 %) as a percentage of observed values. We highlight the need for detailed representations of canopy-snow optical radiative transfer processes in mountainous, dense forest regions if assimilation-based improvements are to be seen in calibrated runs over these areas.
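
    A minimal sketch of the direct-insertion step is given below: the satellite fSCA is first inflated by a below-canopy allowance and then converted back to SWE by inverting an areal depletion curve. The curve shape, the form of the canopy allowance, and all names are illustrative assumptions rather than the SNOW-17 formulation.

    import numpy as np

    def adc(swe_ratio):
        # Illustrative monotone areal depletion curve: fSCA as a function of
        # SWE relative to the seasonal maximum accumulation.
        return np.clip(swe_ratio, 0.0, 1.0) ** 0.6

    def invert_adc(fsca, swe_max, n=1001):
        # Reconstruct SWE from fSCA by tabulated inversion of the curve.
        ratios = np.linspace(0.0, 1.0, n)
        return swe_max * np.interp(fsca, adc(ratios), ratios)

    def assimilate_fsca(swe_max, fsca_obs, canopy_frac):
        # Below-canopy allowance: inflate satellite fSCA to account for snow
        # hidden under dense forest, then directly insert the implied SWE.
        fsca_adj = np.minimum(1.0, fsca_obs / (1.0 - canopy_frac))
        return invert_adc(fsca_adj, swe_max)

    print(assimilate_fsca(swe_max=300.0, fsca_obs=0.4, canopy_frac=0.5))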

  10. First International Diagnosis Competition - DXC'09

    NASA Technical Reports Server (NTRS)

    Kurtoglu, tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC 09. The goal of this competition was to evaluate and compare DAs in a common platform and to determine a winner based on diagnosis results. 12 DAs (model-based and otherwise) competed in this first year of the competition in 3 tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC 09, description of faults and data sets, a listing of participating DAs, the metrics and results computed from running the DAs, and a superficial analysis of the results.

  11. The natural oscillation of two types of ENSO events based on analyses of CMIP5 model control runs

    NASA Astrophysics Data System (ADS)

    Xu, Kang; Su, Jingzhi; Zhu, Congwen

    2014-07-01

    The eastern- and central-Pacific El Niño-Southern Oscillation (EP- and CP-ENSO) have been found to be dominant in the tropical Pacific Ocean, and are characterized by interannual and decadal oscillation, respectively. In the present study, we defined the EP- and CP-ENSO modes by singular value decomposition (SVD) between SST and sea level pressure (SLP) anomaly fields. We evaluated the natural features of these two types of ENSO modes as simulated by the pre-industrial control runs of 20 models involved in phase five of the Coupled Model Intercomparison Project (CMIP5). The results suggested that all the models show good skill in simulating the SST and SLP anomaly dipolar structures of the EP-ENSO mode, but only 12 exhibit good performance in simulating the tripolar CP-ENSO mode. Wavelet analysis suggested that the ensemble principal components in these 12 models exhibit interannual and multi-decadal oscillations related to the EP- and CP-ENSO, respectively. Since there are no changes in external forcing in the pre-industrial control runs, this result implies that the decadal oscillation of CP-ENSO is possibly a result of natural climate variability rather than external forcing.
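
    The mode definition used here is, in essence, a maximum covariance analysis; the sketch below shows the generic calculation of coupled SST-SLP modes by SVD of the anomaly cross-covariance matrix. The random fields and grid sizes are placeholders, not CMIP5 output.

    import numpy as np

    rng = np.random.default_rng(1)
    nt, nx, ny = 240, 60, 40            # months, SST points, SLP points
    sst = rng.normal(size=(nt, nx))     # SST anomalies, time x space
    slp = rng.normal(size=(nt, ny))     # SLP anomalies, time x space

    # Remove time means, form the cross-covariance matrix, and decompose.
    sst -= sst.mean(axis=0)
    slp -= slp.mean(axis=0)
    cov = sst.T @ slp / (nt - 1)
    u, s, vt = np.linalg.svd(cov, full_matrices=False)

    # Leading coupled mode: spatial patterns and expansion coefficients.
    sst_pattern, slp_pattern = u[:, 0], vt[0, :]
    pc_sst, pc_slp = sst @ sst_pattern, slp @ slp_pattern
    frac = s[0] ** 2 / np.sum(s ** 2)   # squared covariance fraction
    print(f"mode 1 explains {frac:.1%} of the squared covariance")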

  12. Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model

    NASA Technical Reports Server (NTRS)

    Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)

    1982-01-01

    The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79, which included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season, using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to the large amount of estimated climatic data for one or two primary base stations during the 1980 season.

  13. Gait kinematics of subjects with ankle instability using a multisegmented foot model.

    PubMed

    De Ridder, Roel; Willems, Tine; Vanrenterghem, Jos; Robinson, Mark; Pataky, Todd; Roosen, Philip

    2013-11-01

    Many patients who sustain an acute lateral ankle sprain develop chronic ankle instability (CAI). Altered ankle kinematics have been reported to play a role in the underlying mechanisms of CAI. In previous studies, however, the foot was modeled as one rigid segment, ignoring the complexity of the ankle and foot anatomy and kinematics. The purpose of this study was to evaluate stance phase kinematics of subjects with CAI, copers, and controls during walking and running using both a rigid and a multisegmented foot model. Foot and ankle kinematics of 77 subjects (29 subjects with self-reported CAI, 24 copers, and 24 controls) were measured during barefoot walking and running using a rigid foot model and the six-segment Ghent Foot Model. Data were collected on a 20-m-long instrumented runway embedded with a force plate and a six-camera optoelectronic system. Groups were compared using statistical parametric mapping. Both the CAI and the coper group showed similar differences during midstance and late stance compared with the control group (P < 0.05). The rigid foot segment showed a more everted position during walking compared with the control group. Based on the Ghent Foot Model, the rear foot also showed a more everted position during running. The medial forefoot showed a more inverted position for both running and walking compared with the control group. Our study revealed significant midstance and late stance differences in rigid foot, rear foot, and medial forefoot kinematics. The multisegmented foot model demonstrated intricate behavior of the foot that is not detectable with rigid foot modeling. Further research using these models is necessary to expand knowledge of foot kinematics in subjects with CAI.

  14. Gravitational baryogenesis in running vacuum models

    NASA Astrophysics Data System (ADS)

    Oikonomou, V. K.; Pan, Supriya; Nunes, Rafael C.

    2017-08-01

    We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a nonzero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a notable difference between gravitational baryogenesis in running vacuum models and in the Einstein-Hilbert case, since in the latter the predicted baryon-to-entropy ratio is zero. We consider two well-known and widely used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. Moreover, we show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.

  15. Radar Unix: a complete package for GPR data processing

    NASA Astrophysics Data System (ADS)

    Grandjean, Gilles; Durand, Herve

    1999-03-01

    A complete package for ground penetrating radar data interpretation, including data processing, forward modeling, and a case history database, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface generating batch files transmitted to a library of processing routines. This design allows better software maintenance and lets users run processing or modeling batch files themselves, deferred in time. A case history database is available and consists of a hypertext document that can be consulted using a standard HTML browser. All the software specifications are presented through a realistic example.

  16. Effects of Meteorological Data Quality on Snowpack Modeling

    NASA Astrophysics Data System (ADS)

    Havens, S.; Marks, D. G.; Robertson, M.; Hedrick, A. R.; Johnson, M.

    2017-12-01

    Detailed quality control of meteorological inputs is the most time-intensive component of running the distributed, physically-based iSnobal snow model, and the effect of data quality of the inputs on the model is unknown. The iSnobal model has been run operationally since WY2013, and is currently run in several basins in Idaho and California. The largest amount of user input during modeling is for the quality control of precipitation, temperature, relative humidity, solar radiation, wind speed and wind direction inputs. Precipitation inputs require detailed user input and are crucial to correctly model the snowpack mass. This research applies a range of quality control methods to meteorological input, from raw input with minimal cleaning, to complete user-applied quality control. The meteorological input cleaning generally falls into two categories. The first is global minimum/maximum and missing value correction that could be corrected and/or interpolated with automated processing. The second category is quality control for inputs that are not globally erroneous, yet are still unreasonable and generally indicate malfunctioning measurement equipment, such as temperature or relative humidity that remains constant, or does not correlate with daily trends observed at nearby stations. This research will determine how sensitive model outputs are to different levels of quality control and guide future operational applications.
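
    The first category lends itself to automation; the sketch below applies global min/max screening and short-gap interpolation to a meteorological series with pandas. The thresholds, column names, and gap limit are illustrative assumptions.

    import numpy as np
    import pandas as pd

    LIMITS = {"air_temp_C": (-40.0, 50.0), "rel_humidity_pct": (0.0, 100.0)}

    def qc_minmax_fill(df, max_gap=3):
        out = df.copy()
        for col, (lo, hi) in LIMITS.items():
            # Flag out-of-range values as missing.
            out.loc[(out[col] < lo) | (out[col] > hi), col] = np.nan
            # Interpolate only short gaps; longer outages are left for the
            # second, user-intensive QC category.
            out[col] = out[col].interpolate(limit=max_gap)
        return out

    df = pd.DataFrame({"air_temp_C": [1.2, 999.0, 2.0, 2.4],
                       "rel_humidity_pct": [55.0, 57.0, -6.0, 60.0]})
    print(qc_minmax_fill(df))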

  17. SIM_EXPLORE: Software for Directed Exploration of Complex Systems

    NASA Technical Reports Server (NTRS)

    Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.

    2013-01-01

    Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest-fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to cleverly choose at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to efficiently explore complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior, and the updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
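
    A minimal sketch of such a closed loop with binary feedback is shown below: a surrogate classifier is refit after each trial and the next trial is the candidate about which the model is least certain. The simulator stub, the choice of random forest, and the uncertainty-sampling rule are illustrative assumptions, not SIM_EXPLORE's actual components.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    def run_simulation(x):
        # Stand-in for an expensive simulation with a binary outcome.
        return int(x[0] ** 2 + x[1] ** 2 < 1.0)

    candidates = rng.uniform(-2, 2, size=(500, 2))    # unexplored settings
    X = [np.array([0.0, 0.0]), np.array([1.5, 1.5])]  # seed both outcomes
    X += list(rng.uniform(-2, 2, size=(8, 2)))
    y = [run_simulation(x) for x in X]

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    for step in range(30):
        model.fit(np.array(X), np.array(y))
        proba = model.predict_proba(candidates)[:, 1]
        pick = int(np.argmin(np.abs(proba - 0.5)))    # most uncertain trial
        X.append(candidates[pick])
        y.append(run_simulation(candidates[pick]))
        candidates = np.delete(candidates, pick, axis=0)
    print(f"ran {len(y)} trials, {sum(y)} successes")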

  18. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.

  19. Impact and intrusion of the foot of a lizard running rapidly on sand

    NASA Astrophysics Data System (ADS)

    Li, Chen; Hsieh, Tonia; Umbanhowar, Paul; Goldman, Daniel

    2012-11-01

    The desert-dwelling zebra-tailed lizard (Callisaurus draconoides, 10 cm, 10 g) runs rapidly (~10 BL/s) on granular media (GM) like sand and gravel. On loosely packed GM, its large hind feet penetrate into the substrate during each step. Based on above-ground observation, a previous study (Li et al., JEB 2012) hypothesized that the hind foot rotated in the vertical plane subsurface to generate lift. To explain the observed center-of-mass dynamics, the model assumed that ground reaction force was dominated by speed-independent frictional drag. Here we use x-ray high speed video to obtain subsurface foot kinematics of the lizard running on GM, which confirms the hypothesized subsurface foot rotation following rapid foot impact at touchdown. However, using impact force measurements, a resistive force model, and the observed foot kinematics, we find that impact force during initial foot touchdown and speed-independent frictional drag during rotation only account for part of the required lift to support locomotion. This suggests that the rapid foot rotation further allows the lizard to utilize inertial forces from the local acceleration of the substrate (particles), similar to small robots running on GM (Qian et al., RSS 2012) and the basilisk (Jesus) lizard running on water.

  20. Is there really an eccentric action of the hamstrings during the swing phase of high-speed running? part I: A critical review of the literature.

    PubMed

    Van Hooren, Bas; Bosch, Frans

    2017-12-01

    It is widely assumed that there is an eccentric hamstring muscle fibre action during the swing phase of high-speed running. However, animal and modelling studies in humans show that the increasing distance between musculotendinous attachment points during forward swing is primarily due to passive lengthening associated with the take-up of muscle slack. Later in the swing phase, the contractile element (CE) maintains a near isometric action while the series elastic (tendinous) element first stretches as the knee extends, and then recoils causing the swing leg to forcefully retract prior to ground contact. Although modelling studies showed some active lengthening of the contractile (muscular) element during the mid-swing phase of high-speed running, we argue that the increasing distance between the attachment points should not be interpreted as an eccentric action of the CE due to the effects of muscle slack. Therefore, there may actually be no significant eccentric, but rather predominantly an isometric action of the hamstrings CE during the swing phase of high-speed running when the attachment points of the hamstrings are moving apart. Based on this, we propose that isometric rather than eccentric exercises are a more specific way of conditioning the hamstrings for high-speed running.

  1. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

    2016-01-01

    This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite-volume-based flow network analysis program developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between the different component models of a large system model. The model results have been compared with test data.
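
    The flavor of such an interface iteration is sketched below: two component models exchange boundary values (here an interface pressure and a pressurant flow rate) until successive exchanges agree. The component relations are toy placeholders chosen to converge, not GFSSP's equations.

    def tank_model(mdot_in):
        # Toy tank: ullage pressure rises with pressurant inflow [kPa].
        return 100.0 + 250.0 * mdot_in

    def pressurization_model(p_interface):
        # Toy pressurant supply: flow drops as back-pressure rises [kg/s].
        return max(0.0, (400.0 - p_interface) / 1000.0)

    p, mdot = 100.0, 0.2  # initial guesses at the interface
    for it in range(1, 101):
        p_new = tank_model(mdot)
        mdot_new = pressurization_model(p_new)
        done = abs(p_new - p) < 1e-6 and abs(mdot_new - mdot) < 1e-9
        p, mdot = p_new, mdot_new
        if done:
            break
    print(f"converged in {it} iterations: p={p:.2f} kPa, mdot={mdot:.4f} kg/s")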

  2. A user-friendly, menu-driven, language-free laser characteristics curves graphing program for desk-top IBM PC compatible computers

    NASA Technical Reports Server (NTRS)

    Klutz, Glenn

    1989-01-01

    A facility was established that takes collected data and feeds it into mathematical models that generate improved data arrays by correcting for various losses and baseline drift and by converting to unity scaling. These data arrays have headers and other identifying information affixed and are subsequently stored in a Laser Materials and Characteristics database which is accessible to various users. The two-part database, absorption and emission spectra plus tabulated data, is developed around twelve laser models. The tabulated section of the database is divided into several parts: crystalline, optical, mechanical, and thermal properties; absorption and emission spectra information; chemical names and formulas; and miscellaneous. A menu-driven, language-free graphing program reduces or removes the requirement that users become competent FORTRAN programmers, along with the concomitant requirements of spending several days to a few weeks becoming conversant with the GEOGRAF library and its sequence of calls and of continually refreshing both. The work included becoming thoroughly conversant with, or at least very familiar with, GEOGRAF by GEOCOMP Corp. The development of the graphing program involved trial runs of the various callable library routines on dummy data in order to become familiar with actual implementation and sequencing. This was followed by trial runs with actual database files and some additional data from current research that was not in the database but currently needed graphs. After successful runs with dummy and real data using actual FORTRAN instructions, steps were undertaken to develop the menu-driven, language-free implementation of a program that requires users only to know how to use a microcomputer: the user simply responds to items displayed on the video screen. To assist the user in arriving at the optimum values needed for a specific graph, a paper-and-pencil checklist was made available for use on the trial runs.

  3. Pushing HTCondor and glideinWMS to 200K+ Jobs in a Global Pool for CMS before Run 2

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Belforte, S.; Bockelman, B.; Gutsche, O.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mason, D.; McCrea, A.; Saiz-Santos, M.; Sfiligoi, I.

    2015-12-01

    The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning system. So far we have been running several independent resource pools, but we are working on unifying them all to reduce the operational load and more effectively share resources between various activities in CMS. The major challenge of this unification activity is scale. The combined pool size is expected to reach 200K job slots, which is significantly bigger than any other multi-user HTCondor based system currently in production. To get there we have studied scaling limitations in our existing pools, the biggest of which tops out at about 70K slots, providing valuable feedback to the development communities, who have responded by delivering improvements which have helped us reach higher and higher scales with more stability. We have also worked on improving the organization and support model for this critical service during Run 2 of the LHC. This contribution will present the results of the scale testing and experiences from the first months of running the Global Pool.

  4. Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services

    USDA-ARS?s Scientific Manuscript database

    The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results additional software has been de...

  5. Autoshaping and Automaintenance: A Neural-Network Approach

    ERIC Educational Resources Information Center

    Burgos, Jose E.

    2007-01-01

    This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an "A-B-A" design…

  6. Sequential Design of Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela

    2017-06-30

    A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency, as measured by the width of the confidence interval for the underlying response surface, which is modeled as a function of 1) flue gas flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) lean solvent loading [0.1-0.3]; and 4) lean solvent flowrate [3000-12000] kg/hr.
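
    One simple way to operationalize this goal is sketched below: after each run, refit a response surface over the four factors above and place the next run where the predicted response is most uncertain (largest leverage), which most reduces the confidence-interval width. The quadratic model form and the candidate-set approach are illustrative assumptions, not the campaign's actual design algorithm.

    import numpy as np

    rng = np.random.default_rng(3)
    lo = np.array([1000.0, 0.125, 0.1, 3000.0])   # factor lower bounds
    hi = np.array([3000.0, 0.175, 0.3, 12000.0])  # factor upper bounds

    def features(x):
        # Linear plus pure-quadratic response-surface terms, scaled to [0, 1].
        z = (x - lo) / (hi - lo)
        return np.concatenate([[1.0], z, z ** 2])

    candidates = rng.uniform(lo, hi, size=(400, 4))
    design = list(rng.uniform(lo, hi, size=(12, 4)))  # initial runs

    for step in range(10):
        F = np.array([features(x) for x in design])
        info_inv = np.linalg.inv(F.T @ F + 1e-9 * np.eye(F.shape[1]))
        # Prediction variance (up to sigma^2) at each candidate setting.
        var = np.array([f @ info_inv @ f for f in map(features, candidates)])
        nxt = int(np.argmax(var))
        design.append(candidates[nxt])
        candidates = np.delete(candidates, nxt, axis=0)
    print(f"{len(design)} runs selected")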

  7. Steady-state, lumped-parameter model for capacitor-run, single-phase induction motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umans, S.D.

    1996-01-01

    This paper documents a technique for deriving a steady-state, lumped-parameter model for capacitor-run, single-phase induction motors. The objective of this model is to predict motor performance parameters such as torque, loss distribution, and efficiency as a function of applied voltage and motor speed as well as the temperatures of the stator windings and of the rotor. The model includes representations of both the main and auxiliary windings (including arbitrary external impedances) and also the effects of core and rotational losses. The technique can be easily implemented, and the resultant model can be used in a wide variety of analyses to investigate motor performance as a function of load, speed, and winding and rotor temperatures. The technique is based upon a coupled-circuit representation of the induction motor. A notable feature of the model is the technique used for representing core loss. In equivalent-circuit representations of transformers and induction motors, core loss is typically represented by a core-loss resistance in shunt with the magnetizing inductance. In order to maintain the coupled-circuit viewpoint adopted in this paper, this technique was modified slightly; core loss is represented by a set of core-loss resistances connected to the "secondaries" of a set of windings which perfectly couple to the air-gap flux of the motor. An example of the technique is presented based upon a 3.5 kW, single-phase, capacitor-run motor, and the validity of the technique is demonstrated by comparing predicted and measured motor performance.

  8. The Invasive Species Forecasting System (ISFS): An iRODS-Based, Cloud-Enabled Decision Support System for Invasive Species Habitat Suitability Modeling

    NASA Technical Reports Server (NTRS)

    Gill, Roger; Schnase, John L.

    2012-01-01

    The Invasive Species Forecasting System (ISFS) is an online decision support system that allows users to load point occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of interest, such as a national park, monument, forest, or refuge. Target customers for ISFS are natural resource managers and decision makers who need scientifically valid, model-based predictions of the habitat suitability of plant species of management concern. In a joint project involving NASA and the Maryland Department of Natural Resources, ISFS has been used to model the potential distribution of Wavyleaf Basketgrass in Maryland's Chesapeake Bay Watershed. Maximum entropy techniques are used to generate predictive maps using predictor datasets derived from remotely sensed data and climate simulation outputs. The workflow to run a model is implemented in an iRODS microservice using a custom ISFS file driver that clips and re-projects data to geographic regions of interest, then shells out to perform MaxEnt processing on the input data. When the model completes, all output files and maps from the model run are registered in iRODS and made accessible to the user. The ISFS user interface is a web browser that uses the iRODS PHP client to interact with the ISFS/iRODS server. ISFS is designed to reside in a VMware virtual machine running SLES 11 and iRODS 3.0. The ISFS virtual machine is hosted in a VMware vSphere private cloud infrastructure to deliver the online service.

  9. Evaluation of the Relationship Between Coral Damage and Tsunami Dynamics; Case Study: 2009 Samoa Tsunami

    NASA Astrophysics Data System (ADS)

    Dilmen, Derya I.; Titov, Vasily V.; Roe, Gerard H.

    2015-12-01

    On September 29, 2009, an Mw = 8.1 earthquake at 17:48 UTC in the Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and the Tonga islands. Tutuila island, which is located 250 km from the earthquake epicenter, experienced tsunami flooding and strong currents on the north and east coasts, causing 34 fatalities (out of 192 total deaths from this tsunami) and widespread structural and ecological damage. The surrounding coral reefs also suffered heavy damage. The damage was formally evaluated based on detailed surveys before and immediately after the tsunami. This setting thus provides a unique opportunity to evaluate the relationship between tsunami dynamics and coral damage. In this study, estimates of the maximum wave amplitudes and coastal inundation of the tsunami are obtained with the MOST model (Titov and Synolakis, J. Waterway Port Coast Ocean Eng, p. 171, 1998; Titov and Gonzalez, NOAA Tech. Memo. ERL PMEL 112:11, 1997), which is now the operational tsunami forecast tool used by the National Oceanic and Atmospheric Administration (NOAA). The earthquake source function was constrained using the real-time deep-ocean tsunami data from three DART® (Deep-ocean Assessment and Reporting for Tsunamis) systems in the far field, and by tide-gauge observations in the near field. We compare the simulated run-up with observations to evaluate the simulation performance. We present an overall synthesis of the tide-gauge data, survey results of the run-up, inundation measurements, and the datasets of coral damage around the island. These data are used to assess the overall accuracy of the model run-up prediction for Tutuila, and to evaluate the model accuracy over the coral reef environment during the tsunami event. Our primary findings are that: (1) MOST-simulated run-up correlates well with observed run-up for this event (r = 0.8), though it tends to underestimate amplitudes over the coral reef environment around Tutuila (for 15 of 31 villages, run-up is underestimated by more than 10%; in only 5 was run-up overestimated by more than 10%), and (2) the locations where the model underestimates run-up also tend to have experienced heavy or very heavy coral damage (8 of the 15 villages), whereas well-estimated run-up locations characteristically experienced low or very low damage (7 of 11 villages). These findings imply that a numerical model may overestimate the energy loss of the tsunami waves during their interaction with the coral reef. We plan future studies to quantify this energy loss and to explore what improvements can be made in simulations of tsunami run-up when simulating coastal environments with fringing coral reefs.

  10. runDM: Running couplings of Dark Matter to the Standard Model

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
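
    As a generic illustration of what "running" a coupling between scales means, the toy sketch below integrates a one-loop renormalization-group equation between a high (mediator) scale and a low scale. This is a conceptual example only; it is not runDM's actual interface, operator basis, or anomalous-dimension matrix.

    import numpy as np
    from scipy.integrate import solve_ivp

    b = -7.0  # illustrative one-loop beta-function coefficient

    def beta(t, g):
        # One-loop running: dg/dln(mu) = b * g^3 / (16 * pi^2).
        return b * g ** 3 / (16.0 * np.pi ** 2)

    g_high = 0.5  # coupling specified at the high (mediator) scale
    sol = solve_ivp(beta, t_span=(np.log(1000.0), np.log(2.0)), y0=[g_high])
    print("coupling at the low scale:", float(sol.y[0, -1]))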

  11. Linear solvation energy relationships in normal phase chromatography based on gradient separations.

    PubMed

    Wu, Di; Lucy, Charles A

    2017-09-22

    Coupling the modified Soczewiński model and one gradient run, a gradient method was developed to build a linear solvation energy relationship (LSER) for normal phase chromatography. The gradient method was tested on dinitroanilinopropyl (DNAP) and silica columns with hexane/dichloromethane (DCM) mobile phases. LSER models built based on the gradient separation agree with those derived from a series of isocratic separations. Both models have similar LSER coefficients and comparable goodness of fit, but the LSER model based on gradient separation required fewer trial-and-error experiments.
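
    For reference, an LSER of the usual Abraham form, log k = c + eE + sS + aA + bB + vV, is an ordinary multiple linear regression; a minimal fitting sketch is below. The descriptor values and retention factors are synthetic placeholders, not data from this study.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 30
    descriptors = rng.uniform(0.0, 2.0, size=(n, 5))  # E, S, A, B, V per solute
    true_coefs = np.array([0.2, 0.5, -0.3, 1.1, -0.8, 0.6])  # c, e, s, a, b, v
    X = np.column_stack([np.ones(n), descriptors])
    logk = X @ true_coefs + rng.normal(0.0, 0.02, n)  # synthetic retention data

    coefs, *_ = np.linalg.lstsq(X, logk, rcond=None)
    print("fitted LSER coefficients (c, e, s, a, b, v):", np.round(coefs, 3))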

  12. Windfield and trajectory models for tornado-propelled objects. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redmann, G.H.; Radbill, J.R.; Marte, J.E.

    1983-03-01

    This is the final report of a three-phased research project to develop a six-degree-of-freedom mathematical model to predict the trajectories of tornado-propelled objects. The model is based on the meteorological, aerodynamic, and dynamic processes that govern the trajectories of missiles in a tornadic windfield. The aerodynamic coefficients for the postulated missiles were obtained from full-scale wind tunnel tests on a 12-inch pipe and car and from drop tests. Rocket sled tests were run whereby the 12-inch pipe and car were injected into a worst-case tornado windfield in order to verify the trajectory model. To simplify and facilitate the use of the trajectory model for design applications without having to run the computer program, this report gives the trajectory data for NRC-postulated missiles in tables based on given variables of initial conditions of injection and tornado windfield. Complete descriptions of the tornado windfield and trajectory models are presented. The trajectory model computer program is also included for those desiring to perform trajectory or sensitivity analyses beyond those included in the report or for those wishing to examine other missiles and use other variables.

  13. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  14. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and generating simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models and also to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.

  15. Influence of "J"-Curve Spring Stiffness on Running Speeds of Segmented Legs during High-Speed Locomotion.

    PubMed

    Wang, Runxiao; Zhao, Wentao; Li, Shujun; Zhang, Shunqi

    2016-01-01

    Both the linear leg spring model and the two-segment leg model with constant spring stiffness have been broadly used as template models to investigate bouncing gaits for legged robots with compliant legs. In addition to these two models, other leg spring stiffness models inspired by biological characteristics have the potential to improve the high-speed running capacity of spring-legged robots. In this paper, we investigate the effects of "J"-curve spring stiffness inspired by biological materials on running speeds of segmented legs during high-speed locomotion. A mathematical formulation of the relationship between the virtual leg force and the virtual leg compression is established. When the SLIP model and the two-segment leg models with constant spring stiffness and with "J"-curve spring stiffness have the same dimensionless reference stiffness, the two-segment leg model with "J"-curve spring stiffness reveals that (1) both the largest tolerated range of running speeds and the tolerated maximum running speed are found and (2) at fast running speeds from 25 to 40/92 m s⁻¹, both the tolerated range of landing angle and the stability region are the largest. It is suggested that the two-segment leg model with "J"-curve spring stiffness is more advantageous for high-speed running compared with the SLIP model and with constant spring stiffness.
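
    To illustrate the qualitative difference between the two spring laws, the sketch below compares a constant-stiffness force law with an assumed exponential "J"-curve law normalized to the same reference stiffness at zero compression. The exponential form is a common stand-in for biological-material stiffening and is an assumption here, not the paper's exact formulation.

    import numpy as np

    k_ref = 20.0  # dimensionless reference stiffness shared by both laws

    def force_linear(eps):
        # Constant stiffness: force proportional to virtual leg compression.
        return k_ref * eps

    def force_jcurve(eps, c=4.0):
        # Assumed "J"-curve: compliant at small compression, stiffening
        # rapidly; scaled so dF/deps at eps = 0 equals k_ref.
        return k_ref / c * (np.exp(c * eps) - 1.0)

    for eps in (0.05, 0.10, 0.20):
        print(f"eps={eps:.2f}  linear={force_linear(eps):6.2f}  "
              f"J-curve={force_jcurve(eps):6.2f}")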

  16. Validating an operational physical method to compute surface radiation from geostationary satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.

    We developed models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high-quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model as well as a validation study using ground-based instruments.

  17. Anchoring quartet-based phylogenetic distances and applications to species tree reconstruction.

    PubMed

    Sayyari, Erfan; Mirarab, Siavash

    2016-11-11

    Inferring species trees from gene trees using coalescent-based summary methods has been the subject of much attention, yet new scalable and accurate methods are needed. We introduce DISTIQUE, a new statistically consistent summary method for inferring species trees from gene trees under the coalescent model. We generalize our results to arbitrary phylogenetic inference problems; we show that two arbitrarily chosen leaves, called anchors, can be used to estimate relative distances between all other pairs of leaves by inferring relevant quartet trees. This results in a family of distance-based tree inference methods, with running times ranging from quadratic to quartic in the number of leaves. We show in simulated studies that DISTIQUE has comparable accuracy to leading coalescent-based summary methods and reduced running times.

  18. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus of the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.

  19. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedural implementations developed in Matlab to simulate agent-based models grounded in a principal programming language and mathematical theory; they are executed on clusters, which provide the high-performance computing needed to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  1. Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data

    NASA Astrophysics Data System (ADS)

    Fuentes, M.

    2015-12-01

    The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami which triggered the national tsunami threat alert. This earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~Mw 9.0) after the two main large historic events of 1868 and 1877. Nonetheless, studies of the seismic source performed with seismic data inversions suggest that the event exhibited a main patch located around 19.8° S at 40 km depth with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8-8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k⁻² seismic sources is presented. A set of those sources is generated and, for each one, a full numerical tsunami model is run in order to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform: it penalizes more heavily when the discrepancies are larger close to the peak run-up location. This criterion allows identification of the seismic source from the set of scenarios that best explains the observations from a statistical point of view. On the other hand, an L2-norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration by retrieving the Green's function coefficients that explain the field measurements. The results obtained confirm that a concentrated down-dip slip patch adequately models the run-up data.
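
    The PNTA inversion has the structure of a linear inverse problem; the sketch below recovers slip coefficients for precomputed Green's function responses by L2 minimization with a nonnegativity constraint. The Green's matrix and observation vector are synthetic placeholders, and the nonnegativity constraint is an illustrative modeling choice rather than the paper's stated one.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    n_obs, n_patch = 40, 12
    G = rng.uniform(0.0, 1.0, size=(n_obs, n_patch))  # PNTA per unit slip
    slip_true = np.zeros(n_patch)
    slip_true[7] = 3.0                                # concentrated down-dip patch
    runup_obs = G @ slip_true + rng.normal(0.0, 0.05, n_obs)

    # Nonnegative least squares: min ||G s - runup_obs||_2 subject to s >= 0.
    slip, resid = nnls(G, runup_obs)
    print("recovered slip:", np.round(slip, 2), "residual norm:", round(resid, 3))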

  2. When Success Pays: Lessons Learned from Arkansas' Move to Performance-Based Funding

    ERIC Educational Resources Information Center

    Callaway, Collin

    2012-01-01

    In 2011, state legislators in Arkansas passed Act 1203, effectively enacting performance-based funding models for all state-run institutions of higher education. Over a period of five years beginning in 2013-14, 25 percent of every institution's base funding will be allocated according to performance. Under the direction of the Arkansas Department…

  3. Walking, running and the evolution of short toes in humans.

    PubMed

    Rolian, Campbell; Lieberman, Daniel E; Hamill, Joseph; Scott, John W; Werbel, William

    2009-03-01

    The phalangeal portion of the forefoot is extremely short relative to body mass in humans. This derived pedal proportion is thought to have evolved in the context of committed bipedalism, but the benefits of shorter toes for walking and/or running have not been tested previously. Here, we propose a biomechanical model of toe function in bipedal locomotion that suggests that shorter pedal phalanges improve locomotor performance by decreasing digital flexor force production and mechanical work, which might ultimately reduce the metabolic cost of flexor force production during bipedal locomotion. We tested this model using kinematic, force and plantar pressure data collected from a human sample representing normal variation in toe length (N=25). The effect of toe length on peak digital flexor forces, impulses and work outputs was evaluated during barefoot walking and running using partial correlations and multiple regression analysis, controlling for the effects of body mass, whole-foot and phalangeal contact times and toe-out angle. Our results suggest that there is no significant increase in digital flexor output associated with longer toes in walking. In running, however, multiple regression analyses based on the sample suggest that increasing average relative toe length by as little as 20% doubles peak digital flexor impulses and mechanical work, probably also increasing the metabolic cost of generating these forces. The increased mechanical cost associated with long toes in running suggests that modern human forefoot proportions might have been selected for in the context of the evolution of endurance running.

  4. Climate Change Impacts on US Agriculture and the Benefits of Greenhouse Gas Mitigation

    NASA Astrophysics Data System (ADS)

    Monier, E.; Sue Wing, I.; Stern, A.

    2014-12-01

    As contributors to the US EPA's Climate Impacts and Risk Assessment (CIRA) project, we present empirically based projections of climate change impacts on the yields of five major US crops. Our analysis uses a 15-member ensemble of climate simulations from the MIT Integrated Global System Model (IGSM) linked to the NCAR Community Atmosphere Model (CAM), forced by 3 emissions scenarios (a "business as usual" reference scenario and two stabilization scenarios at 4.5 W/m² and 3.7 W/m² by 2100), to quantify the agricultural impacts avoided due to greenhouse gas emission reductions. Our innovation is the coupling of climate model outputs with empirical estimates of the long-run relationship between crop yields and temperature, precipitation, and soil moisture derived from the co-variation between yields and weather across US counties over the last 50 years. Our identifying assumption is that since farmers' planting, management, and harvesting decisions are based on land quality and expectations of weather, yields and meteorological variables share a long-run equilibrium relationship. In any given year, weather shocks cause yields to diverge from their expected long-run values, prompting farmers to revise their long-run expectations. We specify a dynamic panel error correction model (ECM) that statistically distinguishes these two processes. The ECM is estimated for maize, wheat, soybeans, sorghum, and cotton using longitudinal data on production and harvested area for ~1,100 counties from 1948-2010, in conjunction with spatial fields of 3-hourly temperature, precipitation, and soil moisture from the Global Land Data Assimilation System (GLDAS) forcing and output files, binned into annual counts of exposure over the growing season and mapped to county centroids. For scenarios of future warming, the identical method was used to calculate counties' current (1986-2010) and future (2036-65 and 2086-2110) distributions of simulated 3-hourly growing season temperature, precipitation, and soil moisture. Finally, we combine these variables with the fitted long-run response to obtain county-level yields under the current average climate and the projected future climate under our three warming scenarios. We close our presentation with a discussion of the implications for mitigation and adaptation decisions.
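
    The two-step error-correction logic can be made concrete with a small sketch: estimate the long-run yield-weather relationship in levels, then regress yield changes on weather shocks and the lagged deviation from that equilibrium. A single synthetic series stands in for the county panel, and the specification below is a simplified illustration of an ECM, not the paper's exact estimator.

    import numpy as np

    rng = np.random.default_rng(6)
    T = 60
    x = rng.normal(size=T).cumsum() * 0.1 + 20.0   # growing-season weather index
    y = 2.0 + 0.3 * x + rng.normal(0.0, 0.1, T)    # crop yield

    # Step 1: long-run equilibrium in levels, y = a + b*x.
    A = np.column_stack([np.ones(T), x])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    ect = y - (a + b * x)                          # error-correction term

    # Step 2: short-run dynamics with the lagged equilibrium deviation,
    # dy_t = gamma * dx_t + alpha * ect_{t-1} + error.
    dy, dx = np.diff(y), np.diff(x)
    B = np.column_stack([dx, ect[:-1]])
    gamma, alpha = np.linalg.lstsq(B, dy, rcond=None)[0]
    print(f"short-run effect: {gamma:.2f}, adjustment speed: {alpha:.2f}")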

  5. DIaaS: Data-Intensive workflows as a service - Enabling easy composition and deployment of data-intensive workflows on Virtual Research Environments

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to running data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and for image storage and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend for VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase 1) and cross-correlates (Phase 2) traces from several seismic stations. The application is submitted via Pegasus (Container 1), and Phase 1 and Phase 2 are executed in the MPI (Container 2) and Storm (Container 3) clusters, respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing engineering time and computational cost.

  6. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for high distribution, extensibility, and reusability, a framework model for a three-tier distributed network management system based on COM/COM+ and DNA is proposed, adopting software component technology and the N-tier application software framework design idea. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.

  7. Singlet-triplet fermionic dark matter and LHC phenomenology

    NASA Astrophysics Data System (ADS)

    Choubey, Sandhya; Khan, Sarif; Mitra, Manimala; Mondal, Subhadeep

    2018-04-01

    It is well known that for pure standard model triplet fermionic WIMP-type dark matter (DM), the relic density is satisfied around 2 TeV. For such a heavy particle, the production cross-section at the 13 TeV run of the LHC will be very small. Extending the model further with a singlet fermion and a triplet scalar, the DM relic density can be satisfied at much lower masses. The lower-mass DM can be copiously produced at the LHC, and hence the model can be tested at the collider. For the present model we have studied the multi-jet (≥ 2j) + missing transverse energy signal and show that it can be detected in the near future of the LHC 13 TeV run. We also predict that the present model is testable by earth-based DM direct detection experiments like Xenon-1T and, in the future, by Darwin.

  8. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform was developed and used to render and display virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost, and good simulation results.

  9. Effects of dynamic agricultural decision making in an ecohydrological model

    NASA Astrophysics Data System (ADS)

    Reichenau, T. G.; Krimly, T.; Schneider, K.

    2012-04-01

    Due to various interdependencies between the cycles of water, carbon, nitrogen, and energy, the impacts of climate change on ecohydrological systems can only be investigated in an integrative way. Furthermore, human intervention in environmental processes makes the system even more complex. On the one hand, human impact affects natural systems; on the other hand, the changing natural systems feed back on human decision making. One of the most important examples of this kind of interaction can be found in the agricultural sector. Management dates (planting, fertilization, harvesting) are chosen based on meteorological conditions and yield expectations. A faster development of crops under a warmer climate causes shorter cropping seasons. The choice of crops depends on their profitability, which is mainly determined by market prices, the agro-political framework, and the (climate-dependent) crop yield. This study investigates these relations for the district of Günzburg, located in the Upper Danube catchment in southern Germany. The modeling system DANUBIA was used to perform dynamically coupled simulations of plant growth, surface and soil hydrological processes, soil nitrogen transformations, and agricultural decision making. The agro-economic model simulates decisions on management dates (based on meteorological conditions and the crops' development state), on fertilization intensities (based on yield expectations), and on the choice of crops (based on profitability). The environmental models included in DANUBIA are to a great extent process-based, to enable their use in a climate change scenario context. Scenario model runs until 2058 were performed using an IPCC A1B forcing. In consecutive runs, dynamic crop management, dynamic crop selection, and a changing agro-political framework were activated. The effects of these model features on hydrological and ecological variables were analyzed separately by comparing the results to a model run with constant crop distribution and constant management. Results show that the influence of the modeled dynamic management adaptation on variables like transpiration, carbon uptake, or nitrate leaching from the vadose zone is stronger than the influence of a dynamic choice of crops. Climate change was found to have a stronger impact on the modeled choice of crops than the agro-political framework. These results suggest that scenario studies in areas with a large share of arable land should take into account management adaptations to a changing climate.

  10. Development of an interactive crop growth web service architecture to review and forecast agricultural sustainability

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.; Walden, V. P.

    2014-12-01

    As climate change and weather variability raise issues regarding agricultural production, agricultural sustainability has become an increasingly important component of farmland management (Fisher, 2005; Akinci, 2013). Yet with changes in soil quality, agricultural practices, weather, topography, land use, and hydrology, accurately modeling such agricultural outcomes has proven difficult (Gassman et al., 2007; Williams et al., 1995). This study examined agricultural sustainability and soil health over a heterogeneous multi-watershed area within the Inland Pacific Northwest of the United States (IPNW), as part of a five-year, USDA-funded effort to explore the sustainability of cereal production systems (Regional Approaches to Climate Change for Pacific Northwest Agriculture - award #2011-68002-30191). In particular, crop growth and soil erosion were simulated across a spectrum of variables and time periods using the CropSyst crop growth model (Stockle et al., 2002) and the Water Erosion Prediction Project model (WEPP - Flanagan and Livingston, 1995), respectively. A preliminary range of historical scenarios was run using a high-resolution, 4 km gridded dataset of surface meteorological variables from 1979-2010 (Abatzoglou, 2012). In addition, Coupled Model Inter-comparison Project (CMIP5) global climate model (GCM) outputs were used as input to run crop growth and erosion future scenarios (Abatzoglou and Brown, 2011). To facilitate our integrated data analysis efforts, an agricultural sustainability web service architecture (THREDDS/Java/Python based) is under development to allow for the programmatic uploading, sharing, and processing of variable input data, the running of model simulations, and the downloading and visualization of output results. The results of this study will assist in better understanding agricultural sustainability and erosion relationships in the IPNW, as well as provide a tangible server-based tool for use by researchers and farmers, for both small-scale field examination and more regionalized scenarios.

  11. Finite element analysis of pedestrian lower limb fractures by direct force: the result of being run over or impact?

    PubMed

    Li, Zhengdong; Zou, Donghua; Liu, Ningguo; Zhong, Liangwei; Shao, Yu; Wan, Lei; Huang, Ping; Chen, Yijiu

    2013-06-10

    The elucidation and prediction of the biomechanics of lower limb fractures could serve as a useful tool in forensic practice. Finite element (FE) analysis could potentially help in understanding the fracture mechanisms of lower limb fractures frequently caused by car-pedestrian accidents. Our aims were (1) to develop and validate an FE model of the human lower limb, (2) to assess the biomechanics of specific injuries under run-over and impact loading conditions, and (3) to reconstruct one real car-pedestrian collision case using the model created in this study. We developed a novel lower limb FE model and simulated three different loading scenarios. The geometry of the model was reconstructed using Mimics 13.0 based on computed tomography (CT) scans from an actual traffic accident. The material properties were based on a synthesis of data found in the published literature. The FE model validation and injury reconstruction were conducted using the LS-DYNA code. The FE model was validated by comparing the simulation results with three-point bending and overall lateral impact tests and with published postmortem human surrogate (PMHS) results. Loading scenarios of running over the thigh with a wheel, impact on the upper leg, and impact on the lower thigh were simulated with velocities of 10 m/s, 20 m/s, and 40 m/s, respectively. We compared the injuries from one actual case with the simulated results in order to explore the possible fracture biomechanism. The peak fracture forces, maximum bending moments, and energy loss ratio exhibited no significant differences between the FE simulations and the literature data. Under simulated run-over conditions a segmental fracture pattern formed, and the femur fracture patterns and mechanisms were consistent with the actual injury features of the case. Our study demonstrates that this simulation method could be effective in assessing forensic cases and exploring the injury mechanisms of lower limb fractures caused by direct force. The model can also help to distinguish between possible and impossible scenarios.

  12. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

    The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we review a current NSF-funded project called CI-WATER that is intended to overcome many of the roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open-source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  13. BeatBox-HPC simulation environment for biophysically and anatomically realistic cardiac electrophysiology.

    PubMed

    Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V

    2017-01-01

    The BeatBox simulation environment combines a flexible script-language user interface with robust computational tools in order to set up cardiac electrophysiology in-silico experiments without low-level re-coding, so that cell excitation, tissue/anatomy models, and stimulation protocols may be included in a BeatBox script, and simulations run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in C and runs on Unix-based platforms. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual cell simulation through 1-dimensional fibre, 2-dimensional sheet and 3-dimensional slab of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, and local/global samples of any variables. BeatBox solvers and the cell and tissue/anatomy model repositories are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.

  14. Vive la Difference: What It Means for State Boards to Embrace Two Models for Public Education

    ERIC Educational Resources Information Center

    Smarick, Andy

    2017-01-01

    The charter school model differs fundamentally from the district-based model of public education delivery that is still dominant in every state. Instead of creating government bodies that directly operate all of an area's public schools, the state approves entities that authorize and oversee schools run by nonprofit organizations. In this article,…

  15. Next Generation Transport Phenomenology Model

    NASA Technical Reports Server (NTRS)

    Strickland, Douglas J.; Knight, Harold; Evans, J. Scott

    2004-01-01

    This report describes the progress made in Quarter 3 of Contract Year 3 on the development of Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.

  16. Case Studies of Forecasting Ionospheric Total Electron Content

    NASA Astrophysics Data System (ADS)

    Mannucci, A. J.; Meng, X.; Verkhoglyadova, O. P.; Tsurutani, B.; McGranaghan, R. M.

    2017-12-01

    We report on medium-range forecast-mode runs of ionosphere-thermosphere coupled models that calculate ionospheric total electron content (TEC), focusing on low-latitude daytime conditions. A medium-range forecast-mode run refers to simulations driven by inputs that can be predicted 2-3 days in advance, for example based on simulations of the solar wind. We will present results from a weak geomagnetic storm caused by a high-speed solar wind stream on June 29, 2012. Simulations based on the Global Ionosphere Thermosphere Model (GITM) and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM) significantly overestimate TEC in certain low-latitude daytime regions, compared to TEC maps based on observations. We will present results from a more intense coronal mass ejection (CME) driven storm where the simulations are closer to observations. We compare high-latitude data sets to model inputs, such as auroral boundary and convection patterns, to assess the degree to which poorly estimated high-latitude drivers may be the largest cause of discrepancy between simulations and observations. Our results reveal many factors that can affect the accuracy of forecasts, including the fidelity of empirical models used to estimate high-latitude precipitation patterns, or observation proxies for solar EUV spectra, such as the F10.7 index. Implications for forecasts with few-day lead times are discussed.

  17. Modeling Anaerobic Soil Organic Carbon Decomposition in Arctic Polygon Tundra: Insights into Soil Geochemical Influences on Carbon Mineralization: Modeling Archive

    DOE Data Explorer

    Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David

    2018-06-13

    This anaerobic carbon decomposition model is developed with explicit representation of fermentation, methanogenesis, and iron reduction by combining three well-known modeling approaches developed in different disciplines: a pool-based model to represent upstream carbon transformations and replenishment of the DOC pool, a thermodynamically based model to calculate rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous-phase speciation and pH calculation. All three are implemented in the open-source geochemical model PHREEQC (V3.0). Installation of PHREEQC is required to run this model.
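
    Since the model ships as PHREEQC input, a batch run can be scripted; a minimal sketch in Python, assuming a batch `phreeqc` binary on the PATH and the usual `phreeqc input output database` invocation (the file names here are hypothetical placeholders):

        import subprocess

        # Run the decomposition model as a batch PHREEQC job; check=True raises
        # if the geochemical run fails. The binary name, input script, and
        # database file are assumptions for illustration.
        subprocess.run(
            ["phreeqc", "anaerobic_c_model.pqi", "anaerobic_c_model.out", "phreeqc.dat"],
            check=True,
        )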

  18. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; they must rapidly search for the object to be shot against complex backgrounds, so fast detection of a given target is the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper treats the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models (DPM). The algorithm computes Histograms of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target in the HOG feature pyramid using a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) in C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
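
    A stripped-down sketch of the root-filter stage (HOG features scored by a linear SVM over a sliding window) using scikit-image and scikit-learn with synthetic patches; a full DPM additionally evaluates part filters, deformation costs, and an image pyramid:

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        WIN = (64, 64)  # detection window in pixels

        def features(patch):
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), feature_vector=True)

        # Synthetic patches stand in for chest-shape positives/negatives.
        rng = np.random.default_rng(0)
        pos = [rng.random(WIN) * 0.3 + 0.6 for _ in range(50)]  # brighter "targets"
        neg = [rng.random(WIN) * 0.5 for _ in range(50)]
        X = np.array([features(p) for p in pos + neg])
        y = np.array([1] * 50 + [0] * 50)
        clf = LinearSVC(C=0.01).fit(X, y)

        def detect(image, stride=16, thresh=0.0):
            # Score every window position at a single scale.
            h, w = image.shape
            hits = []
            for r in range(0, h - WIN[0] + 1, stride):
                for c in range(0, w - WIN[1] + 1, stride):
                    s = clf.decision_function([features(image[r:r + WIN[0], c:c + WIN[1]])])[0]
                    if s > thresh:
                        hits.append((r, c, s))
            return hits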

  19. Finite element modelling of Plantar Fascia response during running on different surface types

    NASA Astrophysics Data System (ADS)

    Razak, A. H. A.; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Hashim, M. S. M.; Daud, R.

    2017-10-01

    The plantar fascia is a ligament found in the human foot, located beneath the skin, that functions to stabilize the longitudinal arch of the foot during standing and normal gait. Performing direct experiments on the plantar fascia is very difficult since the structure is located underneath soft tissue. The aim of this study was to develop a finite element (FE) model of the foot with plantar fascia and investigate the effect of surface hardness on the biomechanical response of the plantar fascia during running. The plantar fascia model was developed using Solidworks 2015 according to the bone structure of a foot model obtained from the Turbosquid database. Boundary conditions were set based on data obtained from experiments on ground reaction force response during running on surfaces of different hardness. The finite element analysis was performed using Ansys 14. The results show that peak stress and strain occurred at the insertion of the plantar fascia to the bone, especially in the calcaneal area. The plantar fascia became stiffer with increasing Young’s modulus and was able to resist more load. Strain in the plantar fascia decreased as Young’s modulus increased under the same loading.

  20. A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1998-01-01

    Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
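
    The generic small-perturbation step can be sketched as finite-difference Jacobians about a steady state (a toy stand-in; the paper's actual procedure works from multidimensional steady-state CFD fields rather than an explicit right-hand-side function):

        import numpy as np

        def linearize(f, x0, u0, eps=1e-6):
            # Finite-difference Jacobians A = df/dx, B = df/du about a steady
            # state; f(x, u) returns the state derivative and f(x0, u0) ~ 0.
            x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
            n, m = x0.size, u0.size
            A, B = np.zeros((n, n)), np.zeros((n, m))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
            return A, B

        # Toy "plant": dx/dt = f(x, u), linearized as dx/dt ~ A x + B u.
        f = lambda x, u: np.array([-2.0 * x[0] + x[1] ** 2, -x[1] + u[0]])
        A, B = linearize(f, x0=[0.0, 0.0], u0=[0.0])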

  1. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
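
    The local one-at-a-time scheme itself is generic; a minimal sketch that perturbs each parameter in turn and reports an elasticity-like sensitivity (the infinite-slope factor-of-safety stand-in and the parameter names are hypothetical, not the coupled hydrological-geomechanical model of the study):

        import math
        import numpy as np

        def oat_sensitivity(model, params, rel_step=0.10):
            # Approximate elasticity: % output change per % parameter change.
            base = model(params)
            out = {}
            for name, value in params.items():
                p = dict(params, **{name: value * (1 + rel_step)})
                out[name] = (model(p) - base) / (abs(base) * rel_step)
            return out

        # Toy factor-of-safety (infinite-slope form) as the "model".
        def factor_of_safety(p):
            return (p["cohesion"] + p["normal_stress"] * math.tan(math.radians(p["phi"]))) \
                   / p["shear_stress"]

        params = {"cohesion": 5.0, "phi": 30.0, "normal_stress": 40.0, "shear_stress": 25.0}
        print(oat_sensitivity(factor_of_safety, params))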

  2. [The functional sport shoe parameter "torsion" within running shoe research--a literature review].

    PubMed

    Michel, F I; Kälin, X; Metzger, A; Westphal, K; Schweizer, F; Campe, S; Segesser, B

    2009-12-01

    Within the sport shoe area, torsion is described as the twisting and decoupling of the rear-, mid- and forefoot along the longitudinal axis of the foot. Studies have shown that running shoes restrict the torsion of the foot and thus increase its pronation. Based on these findings, it is recommended to design running shoes that allow the natural freedom of movement of the foot. The first torsion concept was introduced to the market by adidas(R) in 1989. Since that introduction, only one epidemiological study has been conducted in the running shoe area; it investigated the occurrence of Achilles tendon problems in athletes running in the new "adidas Torsion(R)" shoes. Studies quantifying the optimal range of torsionability with respect to reducing injury incidence are still missing. Newer studies reveal that the criterion of torsion plays only a secondary role in the buying decision. Moreover, athletes are not able to perceive torsionability as a discrete functional parameter. It should be noted that several working groups are engaged in detailed analysis of foot movement based on kinematic multi-segment models. However, scientific as well as popular-scientific contributions show that the original idea of the torsion concept is still not completely understood; hence, the "inverse" characteristic is sometimes postulated. The present literature review leads to the conclusion that the functional characteristics of the torsion concept are not fully implemented within the running shoe area. This implies the need for scientific studies that investigate the relevance of a functional torsion concept for injury prevention, based on basic and applied research. In addition, biomechanical studies should systematically analyse the mechanisms and effects of torsion-relevant technologies and systems.

  3. [CLIMATE CHANGE AND ALLERGIC AIRWAY DISEASE] OBSERVATIONAL, LABORATORY, AND MODELING STUDIES OF THE IMPACTS OF CLIMATE CHANGE ON ALLERGIC AIRWAY DISEASE

    EPA Science Inventory

    Based on these data and preliminary studies, this proposal will be composed of a multiscale source-to-dose analysis approach for assessing the exposure interactions of environmental and biological systems. Once the entire modeling system is validated, it will run f...

  4. Applications products of aviation forecast models

    NASA Technical Reports Server (NTRS)

    Garthner, John P.

    1988-01-01

    A service called the Optimum Path Aircraft Routing System (OPARS) supplies products based on output data from the Navy Operational Global Atmospheric Prediction System (NOGAPS), a model run on a Cyber-205 computer. Temperatures and winds are extracted from the surface to 100 mb, approximately 55,000 ft. Forecast winds are available in six-hour time steps.

  5. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
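
    For the special case of expected-degree (Chung-Lu) kernels, networkx ships a fast sampler in the same spirit; a minimal sketch (the weight sequence is illustrative, and this is not the paper's general kernel algorithm):

        import networkx as nx
        import numpy as np

        # Heavy-tailed expected degrees; expected_degree_graph skips over
        # non-edges geometrically, so large sparse graphs are cheap to sample.
        n = 100_000
        w = 50.0 * np.arange(1, n + 1) ** -0.5
        G = nx.expected_degree_graph(w.tolist(), selfloops=False, seed=42)
        print(G.number_of_nodes(), G.number_of_edges())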

  6. Determination of three-dimensional joint loading within the lower extremities in snowboarding.

    PubMed

    Krüger, Andreas; McAlpine, Paul; Borrani, Fabio; Edelmann-Nusser, Jürgen

    2012-02-01

    In the biomechanical literature, only a few studies are available that focus on the determination of joint loading within the lower extremities in snowboarding, and these studies are limited to analysis in a restricted capture volume due to the use of optical video-based systems. To overcome this restriction, the aim of the present study was to develop a method to determine net joint moments within the lower extremities in snowboarding for complete measurement runs. An experienced snowboarder performed several runs equipped with two custom-made force plates as well as a full-body inertial measurement system. A rigid multi-segment model was developed to describe the motion and loads within the lower extremities. This model is based on an existing lower-body model and is designed to be run by the OpenSim software package. Measured kinetic and kinematic data were imported into the OpenSim program and inverse dynamics calculations were performed. The results illustrate the potential of the developed method for determining joint loading within the lower extremities for complete measurement runs in a real snowboarding environment. The calculated net joint moments are reasonable in comparison with data presented in the literature. Good reliability of the method is indicated by the low variation between different turns. Because the accuracy of the method is unknown, its application to inter-individual studies as well as studies of injury mechanisms may be limited. For intra-individual studies comparing different snowboarding techniques as well as different snowboard equipment, the method appears beneficial. The validity of the method needs to be studied further.

  7. Characteristics of Operational Space Weather Forecasting: Observations and Models

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Viereck, Rodney; Singer, Howard; Onsager, Terry; Biesecker, Doug; Rutledge, Robert; Hill, Steven; Akmaev, Rashid; Milward, George; Fuller-Rowell, Tim

    2015-04-01

    In contrast to research observations, models and ground support systems, operational systems are characterized by real-time data streams and run schedules, with redundant backup systems for most elements of the system. We review the characteristics of operational space weather forecasting, concentrating on the key aspects of ground- and space-based observations that feed models of the coupled Sun-Earth system at the NOAA/Space Weather Prediction Center (SWPC). Building on the infrastructure of the National Weather Service, SWPC is working toward a fully operational system based on the GOES weather satellite system (constant real-time operation with back-up satellites), the newly launched DSCOVR satellite at L1 (constant real-time data network with AFSCN backup), and operational models of the heliosphere, magnetosphere, and ionosphere/thermosphere/mesosphere systems run on the Weather and Climate Operational Supercomputing System (WCOSS), one of the world's largest and fastest operational computer systems, which will be upgraded to a dual 2.5 Pflop system in 2016. We review plans for further operational space weather observing platforms being developed in the context of the Space Weather Operations Research and Mitigation (SWORM) task force in the Office of Science and Technology Policy (OSTP) at the White House. We also review current operational model developments at SWPC, concentrating on the differences between the research codes and the modified real-time versions that must run with zero fault tolerance on the WCOSS systems. Understanding the characteristics and needs of the operational forecasting community is key to producing research into the coupled Sun-Earth system with maximal societal benefit.

  8. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
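
    To show how credible intervals fall out of posterior samples, a minimal single-chain random-walk Metropolis sketch (DREAM itself uses multiple adaptive chains; the toy Gaussian posterior is illustrative only):

        import numpy as np

        def metropolis(logpost, x0, n=50_000, step=0.5, seed=0):
            # Random-walk Metropolis: accept a proposal with probability
            # min(1, posterior ratio); otherwise keep the current state.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, float)
            lp = logpost(x)
            out = np.empty((n, x.size))
            for i in range(n):
                prop = x + step * rng.standard_normal(x.size)
                lp_prop = logpost(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                out[i] = x
            return out

        # Toy posterior: independent N(1, 1) in two dimensions.
        samples = metropolis(lambda t: -0.5 * np.sum((t - 1.0) ** 2), np.zeros(2))
        lo, hi = np.percentile(samples[10_000:], [2.5, 97.5], axis=0)  # 95% credible intervals
        print(lo, hi)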

  9. Partitioning the Metabolic Cost of Human Running: A Task-by-Task Approach

    PubMed Central

    Arellano, Christopher J.; Kram, Rodger

    2014-01-01

    Compared with other species, humans can be very tractable and thus an ideal “model system” for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the “cost of generating force” hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be “individually” partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. PMID:24838747

  10. Partitioning the metabolic cost of human running: a task-by-task approach.

    PubMed

    Arellano, Christopher J; Kram, Rodger

    2014-12-01

    Compared with other species, humans can be very tractable and thus an ideal "model system" for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the "cost of generating force" hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be "individually" partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running.

  11. Catching fly balls in virtual reality: a critical test of the outfielder problem.

    PubMed

    Fink, Philip W; Foo, Patrick S; Warren, William H

    2009-12-14

    How does a baseball outfielder know where to run to catch a fly ball? The "outfielder problem" remains unresolved, and its solution would provide a window into the visual control of action. It may seem obvious that human action is based on an internal model of the physical world, such that the fielder predicts the landing point based on a mental model of the ball's trajectory (TP). However, two alternative theories, Optical Acceleration Cancellation (OAC) and Linear Optical Trajectory (LOT), propose that fielders are led to the right place at the right time by coupling their movements to visual information in a continuous "online" manner. All three theories predict successful catches and similar running paths. We provide a critical test by using virtual reality to perturb the vertical motion of the ball in mid-flight. The results confirm the predictions of OAC but are at odds with LOT and TP.
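
    The OAC signature is easy to reproduce numerically: for a fielder standing at the landing point, the tangent of the ball's elevation angle rises at a constant rate, so its second time derivative (the optical acceleration) vanishes; standing elsewhere it does not. A toy computation with illustrative launch parameters:

        import numpy as np

        g, vx, vz = 9.81, 15.0, 20.0           # launch toward the fielder
        T = 2 * vz / g                          # flight time; ball lands at D = vx * T
        t = np.linspace(0.05, T - 0.05, 200)
        h = vz * t - 0.5 * g * t ** 2           # ball height over time
        D = vx * T

        def tan_alpha(fielder_x):
            # Tangent of the elevation angle seen by a stationary fielder.
            return h / (fielder_x - vx * t)

        for x in (D, D + 5.0):                  # at the landing point vs 5 m too deep
            curv = np.gradient(np.gradient(tan_alpha(x), t), t)
            print(f"fielder at {x:5.1f} m: max |optical acceleration| = {np.abs(curv).max():.3f}")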

  12. Aftermarket Performance of Health Care and Biopharmaceutical IPOs: Evidence From ASEAN Countries

    PubMed Central

    Komenkul, Kulabutr; Kiranand, Santi

    2017-01-01

    We examine the evidence from the long-run abnormal returns using data for 76 health care and biopharmaceutical initial public offerings (IPOs) listed in a 29-year period between 1986 and 2014 in the Association of Southeast Asian Nations (ASEAN) countries such as Indonesia, Malaysia, Singapore, Thailand, the Philippines, Vietnam, Myanmar, and Laos. Based on the event-time approach, the 3-year stock returns of the IPOs are investigated using cumulative abnormal return (CAR) and buy-and-hold abnormal return (BHAR). As a robustness check, the calendar-time approach, related to the market model as well as Fama-French and Carhart models, was applied for verifying long-run abnormal returns. We found evidence that the health care IPOs outperform in the long run, irrespective of the alternative benchmarks and methods. In addition, when we divide our sample into 5 groups by listing countries, our results show that the health care stock prices of the Singaporean firms behaved differently from those of most of the other firms in ASEAN. The Singaporean IPOs are characterized by a worse post-offering performance, whereas the IPOs of Malaysian and Thai health care companies performed better in the long run. PMID:28853306

  13. Aftermarket Performance of Health Care and Biopharmaceutical IPOs: Evidence From ASEAN Countries.

    PubMed

    Komenkul, Kulabutr; Kiranand, Santi

    2017-01-01

    We examine the evidence from the long-run abnormal returns using data for 76 health care and biopharmaceutical initial public offerings (IPOs) listed in a 29-year period between 1986 and 2014 in the Association of Southeast Asian Nations (ASEAN) countries such as Indonesia, Malaysia, Singapore, Thailand, the Philippines, Vietnam, Myanmar, and Laos. Based on the event-time approach, the 3-year stock returns of the IPOs are investigated using cumulative abnormal return (CAR) and buy-and-hold abnormal return (BHAR). As a robustness check, the calendar-time approach, related to the market model as well as Fama-French and Carhart models, was applied for verifying long-run abnormal returns. We found evidence that the health care IPOs outperform in the long run, irrespective of the alternative benchmarks and methods. In addition, when we divide our sample into 5 groups by listing countries, our results show that the health care stock prices of the Singaporean firms behaved differently from those of most of the other firms in ASEAN. The Singaporean IPOs are characterized by a worse post-offering performance, whereas the IPOs of Malaysian and Thai health care companies performed better in the long run.
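
    The two event-time measures are one-liners; a minimal sketch with synthetic monthly returns (the paper's benchmarks and factor models are not reproduced here):

        import numpy as np

        def car(r, rm):
            # Cumulative abnormal return: sum of periodic abnormal returns.
            return float(np.sum(r - rm))

        def bhar(r, rm):
            # Buy-and-hold abnormal return: compounded holding-period difference.
            return float(np.prod(1 + r) - np.prod(1 + rm))

        rng = np.random.default_rng(7)
        r = rng.normal(0.010, 0.08, 36)    # 36 monthly IPO returns (synthetic)
        rm = rng.normal(0.008, 0.05, 36)   # benchmark returns (synthetic)
        print(car(r, rm), bhar(r, rm))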

  14. Asymmetry in Determinants of Running Speed During Curved Sprinting.

    PubMed

    Ishimura, Kazuhiro; Sakurai, Shinji

    2016-08-01

    This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.

  15. Modelling field scale spatial variation in water run-off, soil moisture, N2O emissions and herbage biomass of a grazed pasture using the SPACSYS model.

    PubMed

    Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai

    2018-04-01

    In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account for not only temporal dynamics but also the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field-scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthy improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising but this promise could not be objectively verified.

  16. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate: a simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that runs multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
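
    A minimal sketch of the thermodynamic (power-posterior) estimate of the log marginal likelihood on a conjugate toy problem, assuming simple random-walk Metropolis sampling at each heating coefficient:

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(0.5, 1.0, size=50)

        def loglik(theta):
            return float(-0.5 * np.sum((data - theta) ** 2)
                         - 0.5 * len(data) * np.log(2 * np.pi))

        def logprior(theta):
            return float(-0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi))  # N(0, 1)

        def sample_power(beta, n=5000, step=0.5):
            # Random-walk Metropolis targeting p_beta ~ L(theta)^beta * prior(theta).
            theta, out = 0.0, np.empty(n)
            lp = beta * loglik(theta) + logprior(theta)
            for i in range(n):
                prop = theta + step * rng.standard_normal()
                lp_prop = beta * loglik(prop) + logprior(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                out[i] = theta
            return out[n // 2:]  # discard burn-in

        # ln Z = integral over beta in [0, 1] of E_beta[ln L]; rungs packed near 0.
        betas = np.linspace(0.0, 1.0, 11) ** 3
        means = [np.mean([loglik(t) for t in sample_power(b)]) for b in betas]
        print("ln(marginal likelihood) ~", np.trapz(means, betas))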

  17. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Song, Y T; Chao, Y

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
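
    The core pattern in such an MPI port is domain decomposition with ghost-cell (halo) exchange; a minimal 1-D sketch with mpi4py (not ROMS code; the grid and values are illustrative):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # 1-D decomposition: each rank owns n_local cells plus one ghost cell
        # on each side for neighbor data.
        n_local = 100
        u = np.zeros(n_local + 2)
        u[1:-1] = rank  # stand-in for this rank's interior solution

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Halo exchange: send edge cells to neighbors, receive into ghosts.
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # Ghost cells now hold neighbor values; a stencil update can proceed.
        # Run with, e.g.: mpiexec -n 4 python halo.py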

  18. Error Quantification and Confidence Assessment of Aerothermal Model Predictions for Hypersonic Aircraft (Preprint)

    DTIC Science & Technology

    2013-09-01

    based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and... whereas 5% measurement uncertainty is assumed for the aerodynamic pressure and heat flux measurements. Bayesian updating according... definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the predictions for Run 30 (H/D...

  19. VTI Driving Simulator: Mathematical Model of a Four-wheeled Vehicle for Simulation in Real Time. VTI Rapport 267A.

    ERIC Educational Resources Information Center

    Nordmark, Staffan

    1984-01-01

    This report contains a theoretical model for describing the motion of a passenger car. The simulation program based on this model is used in conjunction with an advanced driving simulator and run in real time. The mathematical model is complete in the sense that the dynamics of the engine, transmission and steering system is described in some…

  20. Intercomparison of Streamflow Simulations between WRF-Hydro and Hydrology Laboratory-Research Distributed Hydrologic Model Frameworks

    NASA Astrophysics Data System (ADS)

    KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.

    2017-12-01

    The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's retrospective run of the NWM (version 1.0) over CONUS based on 1 km grids. We chose 279 USGS stations that are relatively unaffected by dams or reservoirs, in the domains of six different RFCs, and use the daily average values of simulations and observations for convenience of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of HL-RDHM and WRF-Hydro. Our results show a mix of model performance across geographic regions.
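
    For illustration, two error functions commonly used to score daily streamflow simulations against USGS observations (the study does not list its exact metric set, so Nash-Sutcliffe efficiency and RMSE are assumptions):

        import numpy as np

        def rmse(sim, obs):
            # Root-mean-square error, in the units of the flow data.
            return float(np.sqrt(np.mean((sim - obs) ** 2)))

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency: 1 is perfect; below 0 the simulation
            # predicts worse than the mean of the observations.
            return float(1 - np.sum((sim - obs) ** 2)
                         / np.sum((obs - np.mean(obs)) ** 2))

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 50.0, 365)           # synthetic daily flows
        sim = obs * rng.normal(1.0, 0.2, 365)     # a noisy "model"
        print(rmse(sim, obs), nse(sim, obs))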

  1. Taxation, regulation, and addiction: a demand function for cigarettes based on time-series evidence.

    PubMed

    Keeler, T E; Hu, T W; Barnett, P G; Manning, W G

    1993-04-01

    This work analyzes the effects of prices, taxes, income, and anti-smoking regulations on the consumption of cigarettes in California (a 25-cent-per-pack state tax increase in 1989 enhances the usefulness of this exercise). Analysis is based on monthly time-series data for 1980 through 1990. Results show a price elasticity of demand for cigarettes in the short run of -0.3 to -0.5 at mean data values, and -0.5 to -0.6 in the long run. We find at least some support for two further hypotheses: that antismoking regulations reduce cigarette consumption, and that consumers behave consistently with the model of rational addiction.
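
    A minimal sketch of how short- and long-run elasticities can be read off a partial-adjustment log-log demand regression (synthetic data; the paper's actual specification also includes income, taxes, and regulation variables):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic monthly series: ln Q_t = a + b ln P_t + c ln Q_{t-1} + e_t
        rng = np.random.default_rng(3)
        n = 132  # 1980-1990, monthly
        lnP = np.cumsum(rng.normal(0.002, 0.01, n)) + np.log(1.2)
        lnQ = np.empty(n)
        lnQ[0] = np.log(100.0)
        for t in range(1, n):
            lnQ[t] = 2.0 - 0.35 * lnP[t] + 0.45 * lnQ[t - 1] + rng.normal(0, 0.01)

        df = pd.DataFrame({"lnQ": lnQ, "lnP": lnP})
        df["lnQ_lag"] = df["lnQ"].shift(1)
        fit = sm.OLS(df["lnQ"][1:], sm.add_constant(df[["lnP", "lnQ_lag"]][1:])).fit()

        short_run = fit.params["lnP"]                        # short-run price elasticity
        long_run = short_run / (1 - fit.params["lnQ_lag"])   # long-run price elasticity
        print(round(short_run, 2), round(long_run, 2))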

  2. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas chromatography/mass spectrometry (GC/MS) non-targeted assays. When assays are run over weeks or months, technical noise due to batch and run order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, by failure to address batch-specific truncation of low-abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm), which accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to its explicit accommodation of batch effects, run order, and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
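
    To make the batch-effect idea concrete, a deliberately reduced sketch of QC-anchored batch centering for one metabolite (this is not mixnorm itself, which additionally models batch-specific truncation and run order; the column names are hypothetical):

        import numpy as np
        import pandas as pd

        def qc_center(df):
            # Estimate each batch's offset from its QC samples, then remove it.
            qc_means = df.loc[df["is_qc"], :].groupby("batch")["value"].mean()
            grand = qc_means.mean()
            return df["value"] - df["batch"].map(qc_means) + grand

        rng = np.random.default_rng(5)
        df = pd.DataFrame({
            "value": rng.normal(10, 1, 300) + np.repeat([0.0, 0.8, -0.5], 100),  # batch shifts
            "batch": np.repeat([1, 2, 3], 100),
            "is_qc": np.tile([True] + [False] * 9, 30),
        })
        df["normalized"] = qc_center(df)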

  3. Volume 2: Compendium of Abstracts

    DTIC Science & Technology

    2017-06-01

    simulation work using a standard running model for legged systems, the Spring Loaded Inverted Pendulum (SLIP) Model. In this model, the dynamics of a single...bar SLIP model is analyzed using a basin of attraction analysis to determine the optimal configuration for running at different velocities and...acquisition, and the automatic target acquisition were then compared to each other. After running trials with the current system, it will be

  4. On the effect of unsupported sleepers on the dynamic behaviour of a railway track

    NASA Astrophysics Data System (ADS)

    Zhu, J. Y.; Thompson, D. J.; Jones, C. J. C.

    2011-09-01

    The effect of unsupported sleepers on the dynamic behaviour of a railway track is studied based on vehicle-track dynamic interaction theory, using a model of the track as a Timoshenko beam supported on a periodic elastic foundation. Considering the vehicle's running speed and the number of unsupported sleepers, the track dynamic characteristics are investigated and verified in the time and frequency domains by experiments on a 1:5 scale model wheel-rail test rig. The results show that when hanging sleepers are present, leading to a discontinuous and irregular track support, additional wheel-rail interaction forces are generated. These forces increase as further sleepers become unsupported and as the vehicle's running speed increases. The adjacent supports experience increased dynamic forces which will lead to further deterioration of track quality and the formation of long wavelength track irregularities, which worsen the vehicles' running stability and riding comfort. Stationary transfer function measurements of the dynamic behaviour of the track are also presented to support the findings.

  5. IGT-Open: An open-source, computerized version of the Iowa Gambling Task.

    PubMed

    Dancy, Christopher L; Ritter, Frank E

    2017-06-01

    The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture.
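
    For readers unfamiliar with the task, the sketch below shows the core trial loop of a computerized IGT. The payoff schedule is a simplified illustration of the classic four-deck structure and is not IGT-Open's actual schedule.

    ```python
    # Sketch of an IGT trial loop with an illustrative payoff scheme: decks A
    # and B pay more per pick but are disadvantageous on average; C and D are
    # advantageous. (Values are simplified; not IGT-Open's exact schedule.)
    import random

    DECKS = {  # deck: (gain per pick, loss probability, loss size)
        "A": (100, 0.5, 250), "B": (100, 0.1, 1250),
        "C": (50, 0.5, 50),   "D": (50, 0.1, 250),
    }

    def run_igt(choices, start=2000, seed=0):
        rng = random.Random(seed)
        total, history = start, []
        for deck in choices:               # e.g., a list of 100 deck labels
            gain, p_loss, loss = DECKS[deck]
            net = gain - (loss if rng.random() < p_loss else 0)
            total += net
            history.append((deck, net, total))
        return history
    ```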

  6. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Jin, Larry; He, Jincong

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford's Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
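
    A minimal sketch of the POD half of POD-TPWL, under the usual snapshot-SVD construction: training-run states are collected as columns, and a truncated left-singular-vector basis expresses each state with a small number of parameters. The variable names and the energy criterion are assumptions.

    ```python
    # Sketch: proper orthogonal decomposition (POD) from training snapshots.
    # 'snapshots' holds one full-order state vector per column (hypothetical).
    import numpy as np

    def pod_basis(snapshots: np.ndarray, energy: float = 0.999):
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(frac, energy) + 1)   # modes capturing `energy`
        return mean, U[:, :k]  # x ≈ mean + Phi @ z, with z = Phi.T @ (x - mean)
    ```

    The TPWL half then linearizes the full-order residual around saved training states and solves the resulting small system in the reduced coordinates z.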

  7. Pushing HTCondor and glideinWMS to 200K+ Jobs in a Global Pool for CMS before Run 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Belforte, S.; Bockelman, B.

    2015-12-23

    The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning system. So far we have been running several independent resource pools, but we are working on unifying them all to reduce the operational load and more effectively share resources between various activities in CMS. The major challenge of this unification activity is scale. The combined pool size is expected to reach 200K job slots, which is significantly bigger than any other multi-user HTCondor based system currently in production. To get there we have studied scaling limitations in our existing pools, the biggest of which tops out at about 70K slots, providing valuable feedback to the development communities, who have responded by delivering improvements which have helped us reach higher and higher scales with more stability. We have also worked on improving the organization and support model for this critical service during Run 2 of the LHC. This contribution will present the results of the scale testing and experiences from the first months of running the Global Pool.

  8. Models@Home: distributed computing in bioinformatics using a screensaver based approach.

    PubMed

    Krieger, Elmar; Vriend, Gert

    2002-02-01

    Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.

  9. Real-time SWMF-Geospace at CCMC: assessing the quality of output from continuous operational simulations

    NASA Astrophysics Data System (ADS)

    Liemohn, M. W.; Welling, D. T.; De Zeeuw, D.; Kuznetsova, M. M.; Rastaetter, L.; Ganushkina, N. Y.; Ilie, R.; Toth, G.; Gombosi, T. I.; van der Holst, B.

    2016-12-01

    The ground-based magnetometer index Dst is a decent measure of the near-Earth current systems, in particular those in the storm-time inner magnetosphere. The ability of a large-scale, physics-based model to reproduce, or even predict, this index is therefore a tangible measure of the overall validity of the code for space weather research and space weather operational usage. Experimental real-time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC), with results available there (http://ccmc.gsfc.nasa.gov/realtime.php), through the CCMC Integrated Space Weather Analysis (iSWA) site (http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/), and at the Michigan SWMF site (http://csem.engin.umich.edu/realtime). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules and using the BATS-R-US magnetohydrodynamic model and the Ridley Ionosphere Model, one with and one without the Rice Convection Model for inner magnetospheric drift physics. While both have been running for several years, nearly continuous results are available since July 2015. Dst from the model output is compared against the Kyoto real-time Dst. Various quantitative measures are presented to assess the goodness of fit between the models and observations. In particular, correlation coefficients, RMSE, and prediction efficiency are calculated and discussed. In addition, contingency tables are presented, demonstrating the ability of the model to predict "disturbed times" as defined by Dst values below some critical threshold. It is shown that the SWMF run with the inner magnetosphere model is significantly better at reproducing storm-time values, with prediction efficiencies above 0.25 and Heidke skill scores above 0.5. This work was funded by NASA and NSF grants, and the European Union's Horizon 2020 research and innovation programme under grant agreement 637302 PROGRESS.
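
    The sketch below computes two of the measures named here, prediction efficiency and the Heidke skill score, for modeled versus observed Dst; the input arrays and the disturbance threshold are illustrative.

    ```python
    # Sketch: skill measures for modeled vs. observed Dst (hypothetical arrays).
    import numpy as np

    def prediction_efficiency(model, obs):
        # 1 - MSE / variance of the observations; 1 is perfect, <= 0 is no skill.
        return 1.0 - np.mean((model - obs) ** 2) / np.var(obs)

    def heidke_skill_score(model, obs, threshold=-50.0):
        # 2x2 contingency table on "disturbed" (Dst below threshold, e.g. -50 nT)
        m, o = model < threshold, obs < threshold
        a = float(np.sum(m & o))      # hits
        b = float(np.sum(m & ~o))     # false alarms
        c = float(np.sum(~m & o))     # misses
        d = float(np.sum(~m & ~o))    # correct negatives
        return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    ```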

  10. Simulating Future Changes in Spatio-temporal Precipitation by Identifying and Characterizing Individual Rainstorm Events

    NASA Astrophysics Data System (ADS)

    Chang, W.; Stein, M.; Wang, J.; Kotamarthi, V. R.; Moyer, E. J.

    2015-12-01

    A growing body of literature suggests that human-induced climate change may cause significant changes in precipitation patterns, which could in turn influence future flood levels and frequencies and water supply and management practices. Although climate models produce full three-dimensional simulations of precipitation, analyses of model precipitation have focused either on time-averaged distributions or on individual time series with no spatial information. We describe here a new approach based on identifying and characterizing individual rainstorms in either data or model output. Our approach enables us to readily characterize important spatio-temporal aspects of rainstorms including initiation location, intensity (mean and patterns), spatial extent, duration, and trajectory. We apply this technique to high-resolution precipitation over the continental U.S. both from radar-based observations (NCEP Stage IV QPE product, 1-hourly, 4 km spatial resolution) and from model runs with dynamical downscaling (WRF regional climate model, 3-hourly, 12 km spatial resolution). In the model studies we investigate the changes in storm characteristics under a business-as-usual warming scenario to 2100 (RCP 8.5). We find that in these model runs, rainstorm intensity increases as expected with rising temperatures (approximately 7%/K, following increased atmospheric moisture content), while total precipitation increases by a lesser amount (3%/K), consistent with other studies. We identify for the first time the necessary compensating mechanism: in these model runs, individual precipitation events become smaller. Other aspects are approximately unchanged in the warmer climate. Because these spatio-temporal changes in rainfall patterns would impact regional hydrology, it is important that they be accurately incorporated into any impacts assessment. For this purpose we have developed a methodology for producing scenarios of future precipitation that combine observational data and model-projected changes. We statistically describe the future changes in rainstorm characteristics suggested by the WRF model and apply those changes to observational data. The resulting high spatial and temporal resolution scenarios have immediate applications for impacts assessment and adaptation studies.
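
    A minimal sketch of the storm-identification step for a single time slice: threshold the precipitation field, label connected components, and summarize each feature's extent, mean intensity, and centroid. Tracking storms through time, which the full method needs for duration and trajectory, is omitted, and the threshold value is an assumption.

    ```python
    # Sketch: rainstorm identification in one 2-D precipitation field (mm/h).
    import numpy as np
    from scipy import ndimage

    def identify_storms(precip: np.ndarray, threshold: float = 0.5):
        labels, n = ndimage.label(precip >= threshold)   # connected components
        storms = []
        for i in range(1, n + 1):
            mask = labels == i
            ys, xs = np.nonzero(mask)
            storms.append({"cells": int(mask.sum()),          # spatial extent
                           "mean_intensity": float(precip[mask].mean()),
                           "centroid": (float(ys.mean()), float(xs.mean()))})
        return storms
    ```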

  11. On the running of the spectral index to all orders: a new model-dependent approach to constrain inflationary models

    NASA Astrophysics Data System (ADS)

    Zarei, Moslem

    2016-06-01

    In conventional model-independent approaches, the power spectrum of primordial perturbations is characterized by such free parameters as the spectral index, its running, the running of running, and the tensor-to-scalar ratio. In this work we show that, at least for simple inflationary potentials, one can find the primordial scalar and tensor power spectra exactly by resumming over all the running terms. In this model-dependent method, we expand the power spectra about the pivot scale to find the series terms as functions of the e-folding number for some single field models of inflation. Interestingly, for the viable models studied here, one can sum over all the terms and evaluate the exact form of the power spectra. This in turn gives more accurate parametrization of the specific models studied in this work. We finally compare our results with recent cosmic microwave background data to find that our new power spectra are in good agreement with the data.
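
    For reference, the conventional model-independent expansion that this work resums reads, in standard notation with pivot scale k_*, spectral index n_s, running alpha_s, and running of the running beta_s:

    ```latex
    \ln\frac{\mathcal{P}_s(k)}{\mathcal{P}_s(k_*)}
      = (n_s - 1)\ln\frac{k}{k_*}
      + \frac{\alpha_s}{2}\ln^2\frac{k}{k_*}
      + \frac{\beta_s}{6}\ln^3\frac{k}{k_*}
      + \cdots
    ```

    The model-dependent approach described above evaluates all higher-order coefficients for a given potential and sums the series in closed form.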

  12. Long-run evolution of the global economy: 2. Hindcasts of innovation and growth

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2015-03-01

    Long-range climate forecasts rely upon integrated assessment models that link the global economy to greenhouse gas emissions. This paper evaluates an alternative economic framework, outlined in Part 1, that is based on physical principles rather than explicitly resolved societal dynamics. Relative to a reference model of persistence in trends, model hindcasts that are initialized with data from 1950 to 1960 reproduce trends in global economic production and energy consumption between 2000 and 2010 with a skill score greater than 90%. In part, such high skill appears to be because civilization has responded to an impulse of fossil fuel discovery in the mid-twentieth century. Forecasting the coming century will be more of a challenge because the effect of the impulse appears to have nearly run its course. Nonetheless, the model offers physically constrained futures for the coupled evolution of civilization and climate during the Anthropocene.

  13. The application of connectionism to query planning/scheduling in intelligent user interfaces

    NASA Technical Reports Server (NTRS)

    Short, Nicholas, Jr.; Shastri, Lokendra

    1990-01-01

    In the mid-nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real time distributed Artificial Intelligence (AI) and data management. Without regard to the overall problems in distributed AI, efficient models were developed for doing query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in reducing run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule based system with a general spreading activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.

  14. Simulation for Grid Connected Wind Turbines with Fluctuating

    NASA Astrophysics Data System (ADS)

    Ye, Ying; Fu, Yang; Wei, Shurong

    This paper establishes a complete dynamic model of a wind turbine generator system, comprising a wind speed model and a DFIG wind turbine model. A simulation example based on these mathematical models is built using MATLAB. The performance of doubly-fed induction generators (DFIG) connected to the power grid is studied under a three-phase ground fault and under disturbances from gust and mixed wind. The modeled wind farm has a capacity of 9 MW and consists of doubly-fed wind generators. Simulation results demonstrate that a three-phase ground fault on the grid side has relatively little effect on the stability of the doubly-fed wind generators. However, since the wind is the power source, fluctuations in wind speed have a large impact on their stability. The results also show that if the two disturbances occur at the same time, the situation becomes considerably more serious.

  15. An Anticipatory Model of Cavitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B., Jr.; Hylton, J.O.

    1999-04-05

    The Anticipatory System (AS) formalism developed by Robert Rosen provides some insight into the problem of embedding intelligent behavior in machines. AS emulates the anticipatory behavior of biological systems. AS bases its behavior on its expectations about the near future, and those expectations are modified as the system gains experience. The expectation is based on an internal model that is drawn from an appeal to physical reality. To be adaptive, the model must be able to update itself. To be practical, the model must run faster than real time. The need for a physical model and the requirement that the model execute at extreme speeds have held back the application of AS to practical problems. Two recent advances make it possible to consider the use of AS for practical intelligent sensors. First, advances in transducer technology make it possible to obtain previously unavailable data from which a model can be derived. For example, acoustic emissions (AE) can be fed into a Bayesian system identifier that enables the separation of a weak characterizing signal, such as the signature of pump cavitation precursors, from a strong masking signal, such as a pump vibration feature. The second advance is the development of extremely fast, but inexpensive, digital signal processing hardware on which it is possible to run an adaptive Bayesian-derived model faster than real time. This paper reports the investigation of an AS using a model of cavitation based on hydrodynamic principles and Bayesian analysis of data from high-performance AE sensors.

  16. Time-dependent onshore tsunami response

    USGS Publications Warehouse

    Apotsos, Alex; Gelfenbaum, Guy R.; Jaffe, Bruce E.

    2012-01-01

    While bulk measures of the onshore impact of a tsunami, including the maximum run-up elevation and inundation distance, are important for hazard planning, the temporal evolution of the onshore flow dynamics likely controls the extent of the onshore destruction and the erosion and deposition of sediment that occurs. However, the time-varying dynamics of actual tsunamis are even more difficult to measure in situ than the bulk parameters. Here, a numerical model based on the non-linear shallow water equations is used to examine the effects that variations in the wave characteristics, bed slope, and bottom roughness have on the temporal evolution of the onshore flow. Model results indicate that the onshore flow dynamics vary significantly over the parameter space examined. For example, the flow dynamics over steep, smooth morphologies tend to be temporally symmetric, with similar magnitude velocities generated during the run-up and run-down phases of inundation. Conversely, on shallow, rough onshore topographies the flow dynamics tend to be temporally skewed toward the run-down phase of inundation, with the magnitude of the flow velocities during run-up and run-down being significantly different. Furthermore, for near-breaking tsunami waves inundating over steep topography, the flow velocity tends to accelerate almost instantaneously to a maximum and then decrease monotonically. Conversely, when very long waves inundate over shallow topography, the flow accelerates more slowly and can remain steady for a period of time before beginning to decelerate. These results indicate that a single set of assumptions concerning the onshore flow dynamics cannot be applied to all tsunamis, and site specific analyses may be required.
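
    For context, a common one-dimensional form of the non-linear shallow water equations on which such models are based, with flow depth h, velocity u, bed elevation z(x), and a quadratic bottom-friction term (the study's exact friction closure is not specified here, so the coefficient c_f is an assumption):

    ```latex
    \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,
    \qquad
    \frac{\partial (hu)}{\partial t}
      + \frac{\partial}{\partial x}\left(hu^2 + \tfrac{1}{2}\,g h^2\right)
      = -\,g h\,\frac{\partial z}{\partial x} - c_f\,u\,|u|
    ```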

  17. Evaluating the spatiotemporal variations of water budget across China over 1951-2006 using IBIS model

    USGS Publications Warehouse

    Zhu, Q.; Jiang, H.; Liu, J.; Wei, X.; Peng, C.; Fang, X.; Liu, S.; Zhou, G.; Yu, S.; Ju, W.

    2010-01-01

    The Integrated Biosphere Simulator is used to evaluate the spatial and temporal patterns of the crucial hydrological variables [run-off and actual evapotranspiration (AET)] of the water balance across China for the period 1951–2006 including a precipitation analysis. Results suggest three major findings. First, simulated run-off captured 85% of the spatial variability and 80% of the temporal variability for 85 hydrological gauges across China. The mean relative errors were within 20% for 66% of the studied stations and within 30% for 86% of the stations. The Nash–Sutcliffe coefficients indicated that the quantity pattern of run-off was also captured acceptably except for some watersheds in southwestern and northwestern China. The possible reasons for underestimation of run-off in the Tibetan plateau include underestimation of precipitation and uncertainties in other meteorological data due to complex topography, and simplified representations of the soil depth attribute and snow processes in the model. Second, simulated AET matched reasonably with estimated values calculated as the residual of precipitation and run-off for watersheds controlled by the hydrological gauges. Finally, trend analysis based on the Mann–Kendall method indicated that significant increasing and decreasing patterns in precipitation appeared in the northwest part of China and the Yellow River region, respectively. Significant increasing and decreasing trends in AET were detected in the Southwest region and the Yangtze River region, respectively. In addition, the Southwest region, northern China (including the Heilongjiang, Liaohe, and Haihe Basins), and the Yellow River Basin showed significant decreasing trends in run-off, and the Zhemin hydrological region showed a significant increasing trend.
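
    The Nash-Sutcliffe coefficient referred to above is defined, for simulated and observed run-off Q at times t = 1..T, as:

    ```latex
    \mathrm{NSE} \;=\; 1 \;-\;
      \frac{\sum_{t=1}^{T}\bigl(Q_{\mathrm{sim}}^{\,t}-Q_{\mathrm{obs}}^{\,t}\bigr)^{2}}
           {\sum_{t=1}^{T}\bigl(Q_{\mathrm{obs}}^{\,t}-\overline{Q}_{\mathrm{obs}}\bigr)^{2}}
    ```

    NSE = 1 indicates a perfect match, while NSE ≤ 0 means the simulation is no better than predicting the observed mean.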

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb⁻¹ of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  19. Results of an aerodynamic force and moment investigation of an 0.015-scale configuration 3 space shuttle orbiter in the NASA/ARC 3.5-foot hypersonic wind tunnel (OA58)

    NASA Technical Reports Server (NTRS)

    Dziubala, T. J.; Cleary, J. W.

    1974-01-01

    The primary objective of the test was to obtain stability and control data for the basic configuration and an alternate configuration for the Space Shuttle Orbiter. Pitch runs were made with 0 deg of sideslip at Mach numbers of 5.3, 7.3 and 10.3. Six-component force data and fuselage base pressures were recorded for each run. Shadowgraph pictures were taken at selected points. Model 420 was used for the tests.

  20. Chemistry-Climate Interactions in the Goddard Institute for Space Studies General Circulation Model. 2; New Insights into Modeling the Pre-Industrial Atmosphere

    NASA Technical Reports Server (NTRS)

    Grenfell, J. Lee; Shindell, D. T.; Koch, D.; Rind, D.; Hansen, James E. (Technical Monitor)

    2002-01-01

    We investigate the chemical (hydroxyl and ozone) and dynamical response to changing from present day to pre-industrial conditions in the Goddard Institute for Space Studies General Circulation Model (GISS GCM). We identify three main improvements not included by many other works. Firstly, our model includes interactive cloud calculations. Secondly, we reduce sulfate aerosol, which impacts NOx partitioning and hence Ox distributions. Thirdly, we reduce sea surface temperatures and increase ocean ice coverage, which impact water vapor and ground albedo respectively. Changing the ocean data (hence water vapor and ozone) produces a potentially important feedback between the Hadley circulation and convective cloud cover. Our present day run (run 1, control run) had a global mean OH value of 9.8 x 10(exp 5) molecules/cc. For our best estimate of pre-industrial conditions (run 2), which featured modified chemical emissions, sulfate aerosol, and sea surface temperatures/ocean ice, this value changed to 10.2 x 10(exp 5) molecules/cc. Reducing only the chemical emissions of run 1 to pre-industrial levels (run 3) increased this value to 10.6 x 10(exp 5) molecules/cc. Further reducing the sulfate of run 3 to pre-industrial levels (run 4) resulted in a small additional increase in global mean OH (10.7 x 10(exp 5) molecules/cc). Changing the ocean data of run 4 to pre-industrial levels (run 5) reduced this value to 10.3 x 10(exp 5) molecules/cc. Mean tropospheric ozone burdens were 262, 181, 180, 180, and 182 Tg for runs 1-5 respectively.

  1. Cumulative dietary exposure to a selected group of pesticides of the triazole group in different European countries according to the EFSA guidance on probabilistic modelling.

    PubMed

    Boon, Polly E; van Donkersgoed, Gerda; Christodoulou, Despo; Crépet, Amélie; D'Addezio, Laura; Desvignes, Virginie; Ericsson, Bengt-Göran; Galimberti, Francesco; Ioannou-Kakouri, Eleni; Jensen, Bodil Hamborg; Rehurkova, Irena; Rety, Josselin; Ruprich, Jiri; Sand, Salomon; Stephenson, Claire; Strömberg, Anita; Turrini, Aida; van der Voet, Hilko; Ziegler, Popi; Hamey, Paul; van Klaveren, Jacob D

    2015-05-01

    The practicality of performing a cumulative dietary exposure assessment according to the requirements of the EFSA guidance on probabilistic modelling was examined. For this, the acute and chronic cumulative exposure to triazole pesticides was estimated using national food consumption and monitoring data of eight European countries. Both the acute and chronic cumulative dietary exposures were calculated according to two model runs (optimistic and pessimistic) as recommended in the EFSA guidance. The exposures obtained with these model runs differed substantially for all countries, with the highest exposures obtained with the pessimistic model run. In this model run, animal commodities (including cattle milk and different meat types), which entered the exposure calculations at the level of the maximum residue limit (MRL), contributed most to the exposure. We conclude that application of the optimistic model run on a routine basis for cumulative assessments is feasible. The pessimistic model run is laborious, and its exposure results could be too far from reality. More experience with this approach is needed to stimulate discussion of the feasibility of all the requirements, especially the inclusion of MRLs of animal commodities, which seems to lead to unrealistic conclusions regarding their contribution to the dietary exposure. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, Natalie

    2016-09-30

    COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh comprised of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, only on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.

  3. Urban Land: Study of Surface Run-off Composition and Its Dynamics

    NASA Astrophysics Data System (ADS)

    Palagin, E. D.; Gridneva, M. A.; Bykova, P. G.

    2017-11-01

    The qualitative composition of urban land surface run-off is subject to significant variation. To study surface run-off dynamics, examine its behaviour, and discover the reasons for these variations, it is appropriate to use time series analysis. A seasonal decomposition procedure based on a multiplicative model was applied to a time series of monthly dynamics with an annual cycle of seasonal variation. The results of the quantitative chemical analysis of surface wastewater of the 22nd Partsjezd outlet in Samara for the period 2004-2016 were used as basic data. As a result of the analysis, a seasonal pattern of variation in the composition of surface run-off in Samara was identified. Seasonal indices were defined for 15 wastewater quality indicators: BOD (full), suspended materials, mineralization, chlorides, sulphates, ammonium-ion, nitrite-anion, nitrate-anion, phosphates (phosphorus), iron general, copper, zinc, aluminium, petroleum products, and synthetic surfactants (anion-active). Based on the seasonal decomposition of the time series data, the contributions of the trend, seasonal, and random components to the variability of the surface run-off indicators were estimated.
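
    A minimal sketch of the decomposition used here: a multiplicative seasonal decomposition of a monthly series with an annual (period-12) cycle, from which per-month seasonal indices can be read off. The statsmodels routine is one standard implementation, and the input series is hypothetical.

    ```python
    # Sketch: multiplicative seasonal decomposition of a monthly water-quality
    # series (hypothetical pandas Series with a monthly DatetimeIndex).
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    def monthly_seasonal_indices(series: pd.Series):
        result = seasonal_decompose(series, model="multiplicative", period=12)
        # Average the seasonal component by calendar month -> 12 seasonal indices
        indices = result.seasonal.groupby(result.seasonal.index.month).mean()
        return result.trend, indices, result.resid
    ```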

  4. Indirect downscaling of global circulation model data based on atmospheric circulation and temperature for projections of future precipitation in hourly resolution

    NASA Astrophysics Data System (ADS)

    Beck, F.; Bárdossy, A.

    2013-07-01

    Many hydraulic applications, such as the design of urban sewage systems, require projections of future precipitation at high temporal resolution. We developed a method to predict the regional distribution of hourly precipitation sums based on daily mean sea level pressure and temperature data from a Global Circulation Model. It is an indirect downscaling method that avoids the uncertain precipitation output of the model itself. It is based on a fuzzy-logic classification of atmospheric circulation patterns (CPs) that is further subdivided by means of the average daily temperature. The empirical distributions observed at 30 rain gauges for each CP-temperature class are assumed to be constant and are used to project hourly precipitation sums into the future. The method was applied to the CP-temperature sequence derived from the 20th century run and the scenario A1B run of ECHAM5. According to ECHAM5, the summers in southwest Germany will become progressively drier. Nevertheless, the frequency of the highest hourly precipitation sums will increase. According to the predictions, summer water stress and the risk of extreme hourly precipitation will both increase simultaneously during the next decades.

  5. AUTO-MUTE 2.0: A Portable Framework with Enhanced Capabilities for Predicting Protein Functional Consequences upon Mutation.

    PubMed

    Masso, Majid; Vaisman, Iosif I

    2014-01-01

    The AUTO-MUTE 2.0 stand-alone software package includes a collection of programs for predicting functional changes to proteins upon single residue substitutions, developed by combining structure-based features with trained statistical learning models. Three of the predictors evaluate changes to protein stability upon mutation, each complementing a distinct experimental approach. Two additional classifiers are available, one for predicting activity changes due to residue replacements and the other for determining the disease potential of mutations associated with nonsynonymous single nucleotide polymorphisms (nsSNPs) in human proteins. These five command-line driven tools, as well as all the supporting programs, complement those that run our AUTO-MUTE web-based server. Nevertheless, all the codes have been rewritten and substantially altered for the new portable software, and they incorporate several new features based on user feedback. Included among these upgrades is the ability to perform three highly requested tasks: to run "big data" batch jobs; to generate predictions using modified protein data bank (PDB) structures, and unpublished personal models prepared using standard PDB file formatting; and to utilize NMR structure files that contain multiple models.

  6. Impacts of the driver's bounded rationality on the traffic running cost under the car-following model

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai

    2016-09-01

    The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that incorporate it. However, little effort has been made to explore the effects of the driver's bounded rationality on trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on each driver's running cost and on the system's total cost under three definitions of traffic running cost. The numerical results show that accounting for the driver's bounded rationality increases each driver's running cost and the system's total cost under all three definitions.

  7. The long-run dynamic relationship between exchange rate and its attention index: Based on DCCA and TOP method

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Guo, Kun; Lu, Xiaolin

    2016-07-01

    Behavioral information about financial markets plays an increasingly important role in the modern economic system. Behavioral information reflected in Internet search data has already been used for short-term prediction of exchange rates, stock market returns, house prices, and so on. However, the long-run relationship between behavioral information and financial market fluctuations has not been studied systematically. Further, most traditional statistical methods and econometric models cannot capture such a dynamic and non-linear relationship. In this paper, an attention index for the CNY/USD exchange rate is constructed based on search data from the 360 search engine of China. The DCCA and Thermal Optimal Path methods are then used to explore the long-run dynamic relationship between the CNY/USD exchange rate and the corresponding attention index. The results show that a significant interdependency exists and that changes in the exchange rate lag 1-2 days behind the attention index.

  8. Heat uptake in the Southern Ocean in a warmer, windier world: a process-based analysis using an AOGCM with an eddy-permitting ocean

    NASA Astrophysics Data System (ADS)

    Kuhlbrodt, T.; Gregory, J. M.

    2016-02-01

    About 90% of the anthropogenic increase in heat stored in the climate system is found in the oceans. Therefore it is relevant to understand the details of ocean heat uptake. Here we present a detailed, process-based analysis of ocean heat uptake (OHU) processes in HiGEM1.2, an atmosphere-ocean general circulation model (AOGCM) with an eddy-permitting ocean component of 1/3° resolution. Similarly to various other models, HiGEM1.2 shows that the global heat budget is dominated by a downward advection of heat compensated by upward isopycnal diffusion. This upward isopycnal diffusion of heat is located mostly in the Southern Ocean (Fig. 1a). We compare the responses to a 4xCO2 forcing and to an enhancement of the wind stress forcing in the Southern Ocean. In line with the CMIP5 models, HiGEM1.2 shows a band of strong OHU in the mid-latitude Southern Ocean in the 4xCO2 run, which is mostly advective. By contrast, in the high-latitude Southern Ocean regions it is the suppression of convection that leads to OHU (Fig. 1b). In the enhanced wind stress run, convection is strengthened at high southern latitudes (Fig. 1c), leading to heat loss, while the magnitude of the OHU in the southern mid-latitudes is very similar to the 4xCO2 results. Remarkably, there is only very small global OHU in the enhanced wind stress run: the wind stress forcing merely redistributes heat. We relate the ocean changes at high southern latitudes to the effect of climate change on the Antarctic Circumpolar Current (ACC). It weakens in the 4xCO2 run and strengthens in the wind stress run. The weakening is due to a narrowing of the ACC, caused by an expansion of the Weddell Gyre, and a flattening of the isopycnals, which are explained by a combination of the wind stress forcing and increased precipitation. The presentation will also try to clarify the definitions of terms like "advective", "diffusive" and "eddy-induced" when used for observed and modelled (at various resolutions) ocean heat uptake processes. Fig. 1: Horizontally averaged temperature tendency diagnostics for the high-latitude Southern Ocean, for (a) the control run, (b) the 4xCO2 anomalies and (c) the wind stress anomalies. Both axes are scaled according to a power law. "VM" - vertical mixing, which includes convection ("conv").

  9. Evaluation of the coupled COSMO-CLM+NEMO-Nordic model with focus on North and Baltic seas

    NASA Astrophysics Data System (ADS)

    Lenhardt, J.; Pham, T. V.; Früh, B.; Brauch, J.

    2017-12-01

    The region east of the Baltic Sea has been identified as a hot-spot of climate change by Giorgi, 2006, on the basis of temperature and precipitation variability. To study this region, the atmosphere model COSMO-CLM has been coupled to the ocean model NEMO, including the sea ice model LIM3, via the OASIS3-MCT coupler (Pham et al., 2014). The coupler interpolates heat, fresh water, and momentum fluxes, sea level pressure, and the fraction of sea ice at the interface in space and time. Our aim is to find an optimal configuration of the existing coupled regional atmosphere-ocean model COSMO-CLM+NEMO-Nordic. So far, results for the North and Baltic seas show that the coupled run has large biases compared with the E-OBS reference data. Therefore, additional simulation evaluations are planned using independent satellite observation data (e.g. Copernicus, EURO4M). We have performed a series of runs with the coupled COSMO-CLM+NEMO-Nordic model to investigate differences in model output due to different coupling time steps. First analyses of COSMO-CLM 2m temperatures suggest that the coupling time step has an impact on the results of the coupled model run. Additional tests over a longer period of time are being conducted to understand whether the signal-to-noise ratio could influence the bias. The results will be presented in our poster.

  10. PNNL: Climate Modelling

    Science.gov Websites


  11. Possible options to slow down the advancement rate of Tarbela delta.

    PubMed

    Habib-Ur-Rehman; Rehman, Mirza Abdul; Naeem, Usman Ali; Hashmi, Hashim Nisar; Shakir, Abdul Sattar

    2017-12-22

    The pivot point of the delta in Tarbela reservoir has reached about 10.6 km from the dam face, which may result in blocking of the tunnels. The Tarbela delta was modeled from 1979 to 2060 using the HEC-6 model. Initially, the model was calibrated for year 1999 and validated for years 2000, 2001, 2002, and 2006 using data on sediment concentration, reservoir cross sections (73 range lines), elevation-area capacity curves, and inflows and outflows of the reservoir. The model was then used to generate future scenarios, i.e., run-1, run-2, and run-3, with pool levels of 428, 442, and 457 m, respectively, up to 2060. Results of run-1 and run-2 showed delta advancement sufficient to choke the tunnels by 2010 and 2030, respectively. Finally, in run-3, the advancement was further delayed: tunnels 1 and 2 will be choked by year 2050, and the pivot point will reach 6.4 km from the dam face.

  12. The seasonal-cycle climate model

    NASA Technical Reports Server (NTRS)

    Marx, L.; Randall, D. A.

    1981-01-01

    The seasonal cycle run, which will become the control run for the comparison with runs utilizing codes and parameterizations developed by outside investigators, is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high-speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized. The 400-day seasonal cycle run serves as a control run for both medium- and long-range climate forecasts as well as sensitivity studies.

  13. Turbulent Chemical Interaction Models in NCC: Comparison

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Liu, Nan-Suey

    2006-01-01

    The performance of a scalar PDF hydrogen-air combustion model in predicting a complex reacting flow is evaluated. In addition, the results are compared to those obtained by running the same case with the so-called laminar chemistry model and also with a new model based on the concept of mapping partially stirred reactor data onto perfectly stirred reactor data. The results show that the scalar PDF model produces significantly different results from the other two models, and at a significantly higher computational cost.

  14. Getting Medical Directors Out of the C-Suite and Closer to Points of Care.

    PubMed

    Schlosser, Michael

    2016-11-01

    Physicians, more than anyone else, can influence peers when it comes to talking about evidence-based care, even when it runs counter to customary, but costly, practice patterns. The timing couldn't be better to put physicians in this leadership role because of the growing use of value-based payment models.

  15. An Integrated Nurse Practitioner-Run Subspecialty Referral Program for Incontinent Children.

    PubMed

    Jarczyk, Kimberly S; Pieper, Pam; Brodie, Lori; Ezzell, Kelly; D'Alessandro, Tina

    Evidence suggests that urinary and fecal incontinence and abnormal voiding and defecation dynamics are different manifestations of the same syndrome. This article reports the success of an innovative program for care of children with incontinence and dysfunctional elimination. This program is innovative because it is the first to combine subspecialty services (urology, gastroenterology, and psychiatry) in a single point of care for this population and the first reported independent nurse practitioner-run specialty referral practice in a free-standing pediatric ambulatory subspecialty setting. Currently, services for affected children are siloed in the aforementioned subspecialties, fragmenting care. Retrospective data on financial, patient satisfaction, and patient referral base were compiled to assess this program. Analysis indicates that this model is fiscally sound, has similar or higher patient satisfaction scores when measured against physician-run subspecialty clinics, and has an extensive geographic referral base in the absence of marketing. This model has potential transformative significance: (a) the impact of children achieving continence cannot be overstated, (b) configuration of services that cross traditional subspecialty boundaries may have broader application to other populations, and (c) demonstration of effectiveness of non-physician provider reconfiguration of health care delivery in subspecialty practice may extend to the care of other populations. Copyright © 2017 National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved.

  16. Pesticide exposure assessment for surface waters in the EU. Part 2: Determination of statistically based run-off and drainage scenarios for Germany.

    PubMed

    Bach, Martin; Diesner, Mirjam; Großmann, Dietlinde; Guerniche, Djamal; Hommen, Udo; Klein, Michael; Kubiak, Roland; Müller, Alexandra; Preuss, Thomas G; Priegnitz, Jan; Reichenberger, Stefan; Thomas, Kai; Trapp, Matthias

    2017-05-01

    In order to assess surface water exposure to active substances of plant protection products (PPPs) in the European Union (EU), the FOCUS (FOrum for the Co-ordination of pesticide fate models and their USe) surface water workgroup introduced four run-off and six drainage scenarios for Step 3 of the tiered FOCUSsw approach. These scenarios may not necessarily represent realistic worst-case situations for the different Member States of the EU. Hence, the suitability of the scenarios for risk assessment in the national authorisation procedures is not known. Using Germany as an example, the paper illustrates how national soil-climate scenarios can be developed to model entries of active substances into surface waters from run-off and erosion (using the model PRZM) and from drainage (using the model MACRO). In the authorisation procedure for PPPs on Member State level, such soil-climate scenarios can be used to determine exposure endpoints with a defined overall percentile. The approach allows the development of national specific soil-climate scenarios and to calculate percentile-based exposure endpoints. The scenarios have been integrated into a software tool analogous to FOCUS-SWASH which can be used in the future to assess surface water exposure in authorisation procedures of PPPs in Germany. © 2017 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.

  17. Radiative corrections from heavy fast-roll fields during inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S., E-mail: jain@cp3.dias.sdu.dk, E-mail: sandora@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk

    2015-06-01

    We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum. Interestingly, within the present framework it is a generic outcome that a large running implies a small field model with a vanishing tensor-to-scalar ratio, circumventing the normal expectation that small field models typically lead to an unobservably small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.

  18. Overall Preference of Running Shoes Can Be Predicted by Suitable Perception Factors Using a Multiple Regression Model.

    PubMed

    Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah

    2017-05-01

    This study examined (a) the strength of four individual footwear perception factors in influencing the overall preference of running shoes and (b) whether these perception factors satisfied the non-multicollinearity assumption of a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, with tolerance values ranging from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of the variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, so each factor offers a distinct opportunity to improve overall preference for the users. The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.
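
    A sketch of the diagnostic-plus-regression workflow reported above: variance inflation factors (tolerance is their reciprocal) to check non-multicollinearity, followed by a multiple regression of overall preference on the four perception factors. The column names are hypothetical.

    ```python
    # Sketch: VIF/tolerance diagnostics and the preference regression.
    # Hypothetical DataFrame columns: 'fit', 'cushioning', 'arch_support',
    # 'stability' (15-cm VAS scores) and 'preference'.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def preference_model(df: pd.DataFrame):
        X = sm.add_constant(df[["fit", "cushioning", "arch_support", "stability"]])
        vif = {col: variance_inflation_factor(X.values, i)
               for i, col in enumerate(X.columns) if col != "const"}
        tolerance = {col: 1.0 / v for col, v in vif.items()}
        ols = sm.OLS(df["preference"], X).fit()   # R^2 -> variance explained
        return ols, vif, tolerance
    ```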

  19. Impact of Targeted Ocean Observations for Improving Ocean Model Initialization for Coupled Hurricane Forecasting

    NASA Astrophysics Data System (ADS)

    Halliwell, G. R.; Srinivasan, A.; Kourafalou, V. H.; Yang, H.; Le Henaff, M.; Atlas, R. M.

    2012-12-01

    The accuracy of hurricane intensity forecasts produced by coupled forecast models is influenced by errors and biases in SST forecasts produced by the ocean model component and the resulting impact on the enthalpy flux from ocean to atmosphere that powers the storm. Errors and biases in fields used to initialize the ocean model seriously degrade SST forecast accuracy. One strategy for improving ocean model initialization is to design a targeted observing program using airplanes and in-situ devices such as floats and drifters so that assimilation of the additional data substantially reduces errors in the ocean analysis system that provides the initial fields. Given the complexity and expense of obtaining these additional observations, observing system design methods such as OSSEs are attractive for designing efficient observing strategies. A new fraternal-twin ocean OSSE system based on the HYbrid Coordinate Ocean Model (HYCOM) is used to assess the impact of targeted ocean profiles observed by hurricane research aircraft, and also by in-situ float and drifter deployments, on reducing errors in initial ocean fields. A 0.04-degree HYCOM simulation of the Gulf of Mexico is evaluated as the nature run by determining that important ocean circulation features such as the Loop Current and synoptic cyclones and anticyclones are realistically simulated. The data-assimilation system is run on a 0.08-degree HYCOM mesh with substantially different model configuration than the nature run, and it uses a new ENsemble Kalman Filter (ENKF) algorithm optimized for the ocean model's hybrid vertical coordinates. The OSSE system is evaluated and calibrated by first running Observing System Experiments (OSEs) to evaluate existing observing systems, specifically quantifying the impact of assimilating more than one satellite altimeter, and also the impact of assimilating targeted ocean profiles taken by the NOAA WP-3D hurricane research aircraft in the Gulf of Mexico during the Deepwater Horizon oil spill. OSSE evaluation and calibration is then performed by repeating these two OSEs with synthetic observations and comparing the resulting observing system impact to determine if it differs from the OSE results. OSSEs are first run to evaluate different airborne sampling strategies with respect to temporal frequency of flights and the horizontal separation of upper-ocean profiles during each flight. They are then run to assess the impact of releasing multiple floats and gliders. Evaluation strategy focuses on error reduction in fields important for hurricane forecasting such as the structure of ocean currents and eddies, upper ocean heat content distribution, and upper-ocean stratification.

  1. Estimation of a Stopping Criterion for Geophysical Granular Flows Based on Numerical Experimentation

    NASA Astrophysics Data System (ADS)

    Yu, B.; Dalbey, K.; Bursik, M.; Patra, A.; Pitman, E. B.

    2004-12-01

    Inundation area may be the most important factor for mitigation of natural hazards related to avalanches, debris flows, landslides and pyroclastic flows. Run-out distance is the key parameter for inundation because the front deposits define the leading edge of inundation. To define the run-out distance, it is necessary to know when a flow stops. Numerical experiments are presented for determining a stopping criterion and exploring the suitability of a Savage-Hutter granular model for computing inundation areas of granular flows. The TITAN2D model was employed to run numerical experiments based on the Savage-Hutter theory. A potentially reasonable stopping criterion was found as a function of dimensionless average velocity, aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Slumping piles on a horizontal surface and geophysical flows over complex topography were simulated. Several mountainous areas, including Colima volcano (MX), Casita (Nic.), Little Tahoma Peak (WA, USA) and the San Bernardino Mountains (CA, USA) were used to simulate geophysical flows. Volcanic block and ash flows, debris avalanches and debris flows occurred in these areas and caused varying degrees of damage. The areas have complex topography, including locally steep open slopes, sinuous channels, and combinations of these. With different topography and physical scaling, slumping piles and geophysical flows have a somewhat different dependence of dimensionless stopping velocity on power-law constants associated with aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Visual comparison of the details of the inundation area obtained from the TITAN2D model with models that contain some form of viscous dissipation points out weaknesses in the model that are not evident by investigation of the stopping criterion alone.

  2. 21. RW Meyer Sugar Mill: 1876-1889. Simple, single-cylinder, horizontal, reciprocating ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. RW Meyer Sugar Mill: 1876-1889. Simple, single-cylinder, horizontal, reciprocating steam engine, model no. 1, 5' x 10', 6 hp, 175 rpm. Manufactured by Ames Iron Works, Oswego, New York, 1879. View: Steam engine powered the mill's centrifugals. Steam-feed pipe at top left of engine. Steam exhaust pipe leaves base of engine on right end and projects upwards. The boiler feed and supply pipes running water through the engine's pre-heat system are seen running to the lower left end of the engine. Pulley in the foreground was not used. The centrifugals were powered by a belt running from the flywheel in the background. Ball-type governor and pulley are on left end of the engine. - R. W. Meyer Sugar Mill, State Route 47, Kualapuu, Maui County, HI

  3. Comment on Dissociation between running economy and running performance in elite Kenyan distance runners.

    PubMed

    Santos-Concejero, Jordan; Tucker, Ross

    2016-01-01

    Mooses and colleagues suggest that running economy alone does not explain superior distance running performance in elite Kenyan runners. Whilst we agree with the multi-factorial hypothesis for Kenyan running success, we do not believe that running economy can be discounted to the extent suggested by this particular study. Based on the methods used and the range of athletes tested, in this response letter we question whether this study provides any basis for downplaying the influence of running economy or for suggesting that other factors compensate for it to enable superior performance.

  4. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise rather than writing large amounts of code by hand. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink report generator used to create design documents from the models is presented, along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool that supports a single set of test cases across several testing levels, with a test procedure independent of the software and hardware platform, is also presented.
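
    Back-to-back testing drives the legacy implementation and the code generated from the model with identical stimuli and compares outputs within a tolerance. A minimal sketch of that idea follows; both controller functions are hypothetical stand-ins, not the paper's climate control software:

```python
# Minimal back-to-back test: feed identical inputs to the legacy
# controller and the model-generated one, compare within a tolerance.

def legacy_blower_level(cabin_temp, setpoint):
    # Hypothetical stand-in for the hand-written legacy code
    return max(0, min(5, round((setpoint - cabin_temp) * 0.8)))

def model_blower_level(cabin_temp, setpoint):
    # Hypothetical stand-in for code generated from the Simulink model
    return max(0, min(5, round((setpoint - cabin_temp) * 0.8)))

def back_to_back(test_vectors, tol=0):
    failures = []
    for temp, setpoint in test_vectors:
        ref = legacy_blower_level(temp, setpoint)
        new = model_blower_level(temp, setpoint)
        if abs(ref - new) > tol:
            failures.append((temp, setpoint, ref, new))
    return failures

vectors = [(t, 22.0) for t in range(10, 35)]
print("mismatches:", back_to_back(vectors))
```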

  5. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    NASA Astrophysics Data System (ADS)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provisioning of future climate change scenario data at high spatial and temporal resolution and quality in West Africa. Such information is essential for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.

  6. Data-driven modelling of vertical dynamic excitation of bridges induced by people running

    NASA Astrophysics Data System (ADS)

    Racic, Vitomir; Morin, Jean Benoit

    2014-02-01

    With increasingly popular marathon events in urban environments, structural designers face a great deal of uncertainty when assessing the dynamic performance of bridges occupied and dynamically excited by people running. While the dynamic loads induced by pedestrians walking have been intensively studied since the infamous lateral sway of the London Millennium Bridge in 2000, reliable and practical descriptions of running excitation are still very rare and limited. This interdisciplinary study has addressed the issue by bringing together a database of individual running force signals recorded by two state-of-the-art instrumented treadmills and two attempts to mathematically describe the measurements. The first modelling strategy is adopted from the available design guidelines for human walking excitation of structures, featuring a perfectly periodic and deterministic characterisation of pedestrian forces representable via Fourier series. This modelling approach proved to be inadequate for running loads due to the inherently near-periodic nature of the measured signals, the great inter-personal randomness of the dominant Fourier amplitudes and the lack of strong correlation between the amplitudes and running footfall rate. Hence, utilising the database established and motivated by the existing models of wind and earthquake loading, speech recognition techniques and a method of replicating electrocardiogram signals, this paper finally presents a numerical generator of random near-periodic running force signals which can reliably simulate the measurements. Such a model is an essential prerequisite for future quality models of dynamic loading induced by individuals, groups and crowds running under a wide range of conditions, such as perceptibly vibrating bridges and different combinations of visual, auditory and tactile cues.
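
    The first (and, per the paper, inadequate) strategy represents the vertical footfall force as a perfectly periodic truncated Fourier series built on the footfall rate, in the style of walking-load design guidance. A minimal sketch of that deterministic model; the weight and dynamic load factors are illustrative placeholders, not values from the paper's database:

```python
import numpy as np

def periodic_running_force(t, weight_n=700.0, f_step=2.8,
                           dlf=(1.3, 0.3, 0.1), phases=(0.0, 0.0, 0.0)):
    """Deterministic vertical force F(t) = W*(1 + sum_i a_i*sin(2*pi*i*f*t + p_i)).

    weight_n : runner's weight in newtons (placeholder value)
    f_step   : footfall rate in Hz
    dlf      : dynamic load factors for the first harmonics (placeholders)
    """
    force = np.ones_like(t)
    for i, (a, p) in enumerate(zip(dlf, phases), start=1):
        force += a * np.sin(2 * np.pi * i * f_step * t + p)
    return weight_n * force

t = np.linspace(0.0, 2.0, 2000)
force = periodic_running_force(t)
print(f"peak simulated force: {force.max():.0f} N")
```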

  7. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
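
    The screening step is the two-sample KS test between observed and downscaled distributions. A minimal sketch with synthetic monthly precipitation; the real study applies the same test to PPT, TAVE and RUN at each grid location, at the 0.05 significance level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins: "observed" and "downscaled" monthly precipitation
ppt_gsd = rng.gamma(shape=2.0, scale=40.0, size=672)   # 1950-2005 months
ppt_sd  = rng.gamma(shape=2.2, scale=38.0, size=672)

# Two-sample KS test: do the two samples come from the same distribution?
stat, pvalue = stats.ks_2samp(ppt_gsd, ppt_sd)
reliable = pvalue >= 0.05   # fail to reject at the study's 0.05 level
print(f"KS statistic = {stat:.3f}, p = {pvalue:.3f}, reliable = {reliable}")
```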

  8. North Atlantic Ocean OSSE system development: Nature Run evaluation and application to hurricane interaction with the Gulf Stream

    NASA Astrophysics Data System (ADS)

    Kourafalou, Vassiliki H.; Androulidakis, Yannis S.; Halliwell, George R.; Kang, HeeSook; Mehari, Michael M.; Le Hénaff, Matthieu; Atlas, Robert; Lumpkin, Rick

    2016-11-01

    A high resolution, free-running model has been developed for the hurricane region of the North Atlantic Ocean. The model is evaluated with a variety of observations to ensure that it adequately represents both the ocean climatology and variability over this region, with a focus on processes relevant to hurricane-ocean interactions. As such, it can be used as the "Nature Run" (NR) model within the framework of Observing System Simulation Experiments (OSSEs), designed specifically to improve the ocean component of coupled ocean-atmosphere hurricane forecast models. The OSSE methodology provides quantitative assessment of the impact of specific observations on the skill of forecast models and enables the comprehensive design of future observational platforms and the optimization of existing ones. Ocean OSSEs require a state-of-the-art, high-resolution free-running model simulation that represents the true ocean (the NR). This study concentrates on the development and data-based evaluation of the NR model component, which leads to a reliable model simulation that has a dual purpose: (a) to provide the basis for future hurricane related OSSEs; (b) to explore process oriented studies of hurricane-ocean interactions. A specific example is presented, where the impact of Hurricane Bill (2009) on the eastward extension and transport of the Gulf Stream is analyzed. The hurricane induced cold wake is shown in both the NR simulation and observations. Interaction of storm-forced currents with the Gulf Stream produced a temporary large reduction in eastward transport downstream from Cape Hatteras and had a marked influence on frontal displacement in the upper ocean. The kinetic energy due to ageostrophic currents showed a significant increase as the storm passed, and then decreased to pre-storm levels within 8 days after the hurricane advanced further north. This is a unique result of direct hurricane impact on a western boundary current, with possible implications for the ocean feedback on hurricane evolution.

  9. Three essays in energy consumption: Time series analyses

    NASA Astrophysics Data System (ADS)

    Ahn, Hee Bai

    1997-10-01

    Firstly, this dissertation investigates which demand specification, the conventional specification or the limited specification, is the appropriate model for long-run energy demand. In order to determine the components of a stable long-run demand for different sectors of the energy industry, I perform cointegration tests using the Johansen test procedure. First, I test the conventional demand specification, which includes prices and income as components. Second, I test a limited demand specification with only income as a component. These cointegration tests determine which of the two demand specifications is a good long-run predictor of energy consumption. Secondly, for the purpose of planning and forecasting energy demand in a cointegrated system, long-run elasticities are of particular interest. To retrieve the optimal level of energy demand in the case of a price shock, we need long-run rather than short-run elasticities. The energy demand study provides valuable information to energy policy makers concerned about the long-run impact of taxes and tariffs. The long-run price elasticity is a primary barometer of the substitution effect between energy and non-energy inputs, and the long-run income elasticity is an important factor because it measures whether energy demand is growing more slowly or more quickly than in the past. A further problem in estimating total energy demand is an aggregation bias stemming from summing four different energy types into total aggregate prices and total aggregate energy consumption. In order to measure the aggregation bias between the Btu aggregation method and the Divisia Index method, i.e., to determine which methodology has less aggregation bias in the long run, I compare the two sets of estimates with results estimated on a disaggregated basis. Thus, we can confirm whether or not the theoretically superior methodology has less aggregation bias in empirical estimation. Thirdly, I investigate the causal relationships between energy use and GDP. In order to detect causal relationships both in the long run and in the short run, the VECM (Vector Error Correction Model) can be used if cointegration relationships exist among the variables. I detect the causal effects between energy use and GDP by estimating the VECM based on a multivariate production function including labor and capital variables.
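
    As a sketch of the first step, the Johansen procedure is available in statsmodels; the toy below tests for a cointegrating relation in a synthetic energy-price-income system (illustrative data, not the dissertation's series):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
n = 200
income = np.cumsum(rng.normal(0.02, 0.1, n))   # I(1) income
price  = np.cumsum(rng.normal(0.00, 0.1, n))   # I(1) price
# Energy consumption cointegrated with income and price, plus noise
energy = 0.8 * income - 0.3 * price + rng.normal(0, 0.05, n)

data = np.column_stack([energy, price, income])
res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (tr, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):  # 5% critical values
    print(f"H0: rank <= {r}: trace = {tr:.2f}, 5% cv = {cv:.2f}, "
          f"reject = {tr > cv}")
```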

  10. Future prospects of mass-degenerate Higgs bosons in the CP-conserving two-Higgs-doublet model

    NASA Astrophysics Data System (ADS)

    Bian, Ligong; Chen, Ning; Su, Wei; Wu, Yongcheng; Zhang, Yu

    2018-06-01

    The scenario of two mass-degenerate Higgs bosons within the general two-Higgs-doublet model (2HDM) is revisited. We focus on the global picture when the two CP-even Higgs bosons h and H are nearly mass-degenerate. A global fit to the signal strength of the 125 GeV Higgs measured at the LHC is performed. Based on the best-fit result for the 2HDM mixing angles (α, β), theoretical constraints, charged and CP-odd Higgs boson direct search constraints and the electroweak precision constraints are imposed on the 2HDM parameter space. We present the signal predictions of the (4b, 2b2γ) channels for the benchmark models at the LHC 14 TeV runs. We also study the direct Higgs boson pair production at the LHC, and the Z-associated Higgs boson pair production search at the ILC 500 GeV runs, as well as the indirect probes at the CEPC 250 GeV run. We find that the mass-degenerate Higgs boson scenario in the Type-II 2HDM can be fully probed by these future experimental searches.

  11. Recent Advances in the LEWICE Icing Model

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Addy, Gene; Struk, Peter; Bartkus, Tadas

    2015-01-01

    This paper describes two recent modifications to the Glenn ICE software. First, a capability for modeling ice crystals and mixed-phase icing has been modified based on recent experimental data. Modifications have been made to the ice particle bouncing and erosion model. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to ice-crystal ice accretion tests performed in the NRC Research Altitude Test Facility (RATFac). Second, modifications were made to the runback model based on data and observations from thermal scaling tests performed in the NRC Altitude Icing Tunnel.

  12. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    PubMed

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques, including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms, as well as traditional logistic regression techniques, to the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that adding the proposed driver aggressiveness predictor to the model produces a statistically significant increase in model accuracy. Moreover, the false alarm rate is reduced, although this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs best in terms of classification accuracy, although not in terms of false positive rate.
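
    A minimal sketch of the classification setup, with synthetic features standing in for time to intersection, approach speed, the proposed aggressiveness measure and surface condition (not the VTTI data, and a plain RBF SVM rather than the paper's tuned models):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 1000
tti = rng.uniform(1.0, 6.0, n)            # time to intersection, s
speed = rng.normal(70, 10, n)             # approach speed, km/h
aggressive = rng.uniform(0, 1, n)         # hypothetical aggressiveness predictor
wet = rng.integers(0, 2, n)               # roadway surface condition
# Synthetic rule: short TTI, high speed and aggressive drivers tend to run
logit = -1.5 * tti + 0.03 * speed + 2.0 * aggressive - 0.5 * wet + 1.0
run = (logit + rng.logistic(0, 1, n) > 0).astype(int)

X = np.column_stack([tti, speed, aggressive, wet])
X_tr, X_te, y_tr, y_te = train_test_split(X, run, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"stop/run accuracy: {clf.score(X_te, y_te):.3f}")
```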

  13. A Brief Opportunity to Run Does Not Function as a Reinforcer for Mice Selected for High Daily Wheel-Running Rates

    ERIC Educational Resources Information Center

    Belke, Terry W.; Garland, Theodore, Jr.

    2007-01-01

    Mice from replicate lines, selectively bred based on high daily wheel-running rates, run more total revolutions and at higher average speeds than do mice from nonselected control lines. Based on this difference it was assumed that selected mice would find the opportunity to run in a wheel a more efficacious consequence. To assess this assumption…

  14. Online model checking approach based parameter estimation to a neuronal fate decision simulation model in Caenorhabditis elegans with hybrid functional Petri net with extension.

    PubMed

    Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru

    2011-05-01

    Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by analysis of the underlying model for neuronal cell fate decision (the ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we performed 20 million simulation runs and were able to locate 57 parameter sets for the 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Our simulation results showed that, among these models, one is the most reasonable and robust owing to its high stability against these stochastic noises. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
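
    The framework couples simulation with on-the-fly checking, keeping only parameter sets whose trajectories satisfy rules extracted from the literature. A toy sketch of that accept/reject loop; the one-species kinetic model and threshold rules are invented for illustration, whereas the real system checks 45 rules over an HFPNe model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(k_prod, k_deg, t_end=10.0, dt=0.01):
    """Toy one-species kinetics: dx/dt = k_prod - k_deg * x."""
    x, traj = 0.0, []
    for _ in range(int(t_end / dt)):
        x += (k_prod - k_deg * x) * dt
        traj.append(x)
    return np.array(traj)

def satisfies_rules(traj):
    """Invented stand-ins for biological rules; in the real framework these
    property checks come from published observations."""
    return 0.8 < traj[-1] < 1.2 and traj.max() < 1.5

accepted = []
for _ in range(2000):
    k_prod, k_deg = rng.uniform(0.1, 2.0, size=2)
    if satisfies_rules(simulate(k_prod, k_deg)):
        accepted.append((k_prod, k_deg))
print(f"{len(accepted)} parameter sets consistent with the rules")
```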

  15. Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.

    PubMed

    Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas

    2004-08-01

    The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.
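
    PsN itself is Perl and its exact API is not reproduced here; purely to illustrate the pattern the abstract describes (classes wrapping model files, with helpers such as re-seeding initial estimates from a previous run), here is a hypothetical Python analogue with an invented file layout:

```python
import re

class ModelFile:
    """Hypothetical analogue of a PsN-style model-file class."""

    def __init__(self, path):
        self.path = path
        with open(path) as fh:
            self.text = fh.read()

    def set_initial_estimates(self, estimates):
        # Replace the numbers in the $THETA record with new initial values.
        new_record = " ".join(str(v) for v in estimates)
        self.text = re.sub(r"(?<=\$THETA ).*", new_record, self.text, count=1)

    def write(self, path):
        with open(path, "w") as fh:
            fh.write(self.text)

# Create a toy control file, then seed a new run with updated estimates
# (file names and record layout are illustrative only).
with open("run1.mod", "w") as fh:
    fh.write("$PROBLEM demo\n$THETA 0.1 1.0 0.05\n")

model = ModelFile("run1.mod")
model.set_initial_estimates([0.5, 1.2, 0.08])
model.write("run2.mod")
print(open("run2.mod").read())
```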

  16. Intelligent Predictor of Energy Expenditure with the Use of Patch-Type Sensor Module

    PubMed Central

    Li, Meina; Kwak, Keun-Chang; Kim, Youn-Tae

    2012-01-01

    This paper is concerned with an intelligent predictor of energy expenditure (EE) using a developed patch-type sensor module for wireless monitoring of heart rate (HR) and movement index (MI). For this purpose, an intelligent predictor is designed by an advanced linguistic model (LM) with interval prediction based on fuzzy granulation, which can be realized by context-based fuzzy c-means (CFCM) clustering. The system components consist of a sensor board, the rubber case, and the communication module with a built-in analysis algorithm. This sensor is patched onto the user's chest to obtain physiological data in indoor and outdoor environments. Prediction performance was quantified by the root mean square error (RMSE) and was evaluated as the number of contexts and clusters increased from 2 to 6, respectively. Thirty participants were recruited from Chosun University to take part in this study. The data sets were recorded during normal walking, brisk walking, slow running, and jogging in an outdoor environment and during treadmill running in an indoor environment. We randomly divided the data into a training set (60%) and a test set (40%) in the normalized space over 10 iterations. The training set is used for model construction, while the test set is used for model validation. The experimental results revealed that the prediction error in the treadmill running simulation was improved by about 51% and 12% in comparison to the conventional LM for the training and checking data sets, respectively. PMID:23202166
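
    A minimal sketch of the evaluation protocol (ten random 60/40 splits, RMSE on the held-out set), with an ordinary least-squares regressor standing in for the CFCM-based linguistic model and synthetic HR/MI data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 300
hr = rng.normal(110, 25, n)                         # heart rate, bpm
mi = rng.normal(1.5, 0.6, n)                        # movement index (synthetic)
ee = 0.05 * hr + 2.0 * mi + rng.normal(0, 0.5, n)   # toy energy expenditure
X = np.column_stack([hr, mi])

rmses = []
for it in range(10):                                # ten random 60/40 splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, ee, train_size=0.6,
                                              random_state=it)
    model = LinearRegression().fit(X_tr, y_tr)
    rmses.append(np.sqrt(mean_squared_error(y_te, model.predict(X_te))))
print(f"mean test RMSE over 10 iterations: {np.mean(rmses):.3f}")
```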

  17. Uptake and storage of anthropogenic CO2 in the pacific ocean estimated using two modeling approaches

    NASA Astrophysics Data System (ADS)

    Li, Yangchun; Xu, Yongfu

    2012-07-01

    A basin-wide ocean general circulation model (OGCM) of the Pacific Ocean is employed to estimate the uptake and storage of anthropogenic CO2 using two different simulation approaches. One simulation (named BIO) makes use of a carbon model with biological processes and full thermodynamic equations to calculate the surface water partial pressure of CO2, whereas the other simulation (named PTB) makes use of a perturbation approach to calculate the surface water partial pressure of anthropogenic CO2. The results from the two simulations agree well with the estimates based on observation data in most important aspects of the vertical distribution as well as the total inventory of anthropogenic carbon. The storage of anthropogenic carbon from BIO is closer to the observation-based estimate than that from PTB. The Revelle factor in 1994 obtained in BIO is generally larger than that obtained in PTB in the whole Pacific, except for the subtropical South Pacific. This, to a large extent, leads to the difference in the surface anthropogenic CO2 concentration between the two runs. The relative difference in the annual uptake between the two runs is almost constant during the integration after 1850. This is probably not caused by dissolved inorganic carbon (DIC), but rather by a factor independent of time. In both runs, the rate of change in anthropogenic CO2 fluxes with time is consistent with the rate of change in the growth rate of the atmospheric partial pressure of CO2.

  18. Influence of manual therapy on functional mobility after joint injury in a rat model.

    PubMed

    Ruhlen, Rachel L; Snider, Eric J; Sargentini, Neil J; Worthington, Bart D; Singh, Vineet K; Pazdernik, Vanessa K; Johnson, Jane C; Degenhardt, Brian F

    2013-10-01

    Animal models can be used to investigate manual therapy mechanisms, but testing manipulation in animal models is problematic because animals cannot directly report their pain. The objective of this study was to develop a rat model of inflammatory joint injury to test the efficacy of manual therapy in reducing nociception and restoring function. The authors induced acute inflammatory joint injury in rats by injecting carrageenan into the ankle and then measured voluntary running wheel activity in treated and untreated rats. Treatments included manual therapy applied to the ankle and knee of the injured limb and several analgesic medications (e.g., morphine, ketorolac, prednisone). Intra-articular injection of carrageenan into the ankle produced significant swelling (the diameter of the ankle increased by 64% after injection; P=.004) and a robust reduction in voluntary running wheel activity (running distance reduced by 91% compared with controls; P<.001). Injured rats gradually returned to running levels equal to controls over 10 days. Neither manual therapy nor analgesic medications increased running wheel activity relative to untreated rats. Voluntary running wheel activity appears to be an appropriate functional measure to evaluate the impact of an acute inflammatory joint injury. However, efforts to treat the injury did not restore running relative to untreated rats.

  19. The joy of interactive modeling

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert

    2013-04-01

    The conventional way of working with hydrodynamical models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files); 2) run the model from start to end; 3) visualize the results; 4) repeat any of the previous steps. This cycle commonly takes from hours to several days. What if we could make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run the model and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions, or state changes, at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates how the model engine is instructed from 'outside', instead of the model engine taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables, and control functions, e.g. for time stepping, via a standardized interface, such as BMI (Peckham et al., 2012). In this work we have used the shallow water flow model engine D-Flow Flexible Mesh. The model was converted from an executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using the VTK library.
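
    The key enabler is a standardized control interface on the engine, in the spirit of BMI. The sketch below shows the shape of such an interface and the interactive loop it permits, with a trivial stand-in engine; the method names follow the BMI style but the class is illustrative, not the D-Flow Flexible Mesh API:

```python
class ToyEngine:
    """Trivial stand-in for a model engine exposed via a BMI-style interface."""

    def initialize(self):
        self.t, self.level = 0.0, 1.0

    def update(self, dt=1.0):
        # Relax the water level toward the current inflow forcing
        self.level += 0.1 * (self._inflow - self.level) * dt
        self.t += dt

    def set_value(self, name, value):
        setattr(self, "_" + name, value)   # push new forcing at any time

    def get_value(self, name):
        return getattr(self, name)

engine = ToyEngine()
engine.initialize()
engine.set_value("inflow", 1.0)
for step in range(10):
    if step == 5:                          # user edits the forcing mid-run,
        engine.set_value("inflow", 3.0)    # no restart required
    engine.update()
    print(f"t={engine.get_value('t'):.0f}  "
          f"level={engine.get_value('level'):.3f}")
```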

  20. CMIP5 models' shortwave cloud radiative response and climate sensitivity linked to the climatological Hadley cell extent

    NASA Astrophysics Data System (ADS)

    Lipat, Bernard R.; Tselioudis, George; Grise, Kevin M.; Polvani, Lorenzo M.

    2017-06-01

    This study analyzes Coupled Model Intercomparison Project phase 5 (CMIP5) model output to examine the covariability of interannual Southern Hemisphere Hadley cell (HC) edge latitude shifts and shortwave cloud radiative effect (SWCRE). In control climate runs, during years when the HC edge is anomalously poleward, most models substantially reduce the shortwave radiation reflected by clouds in the lower midlatitude region (LML; ~28°S-~48°S), although no such reduction is seen in observations. These biases in HC-SWCRE covariability are linked to biases in the climatological HC extent. Notably, models with excessively equatorward climatological HC extents have weaker climatological LML subsidence and exhibit larger increases in LML subsidence with poleward HC edge expansion. This behavior, based on control climate interannual variability, has important implications for the CO2-forced model response. In 4×CO2-forced runs, models with excessively equatorward climatological HC extents produce stronger SW cloud radiative warming in the LML region and tend to have larger climate sensitivity values than models with more realistic climatological HC extents.

  1. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict milling tool wear in a regular cut as well as the entry cut and exit cut of a milling tool. The model is based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on milling tool flank wear, with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with the experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
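
    A compact sketch of the PSO-SVM idea: particles move through (C, gamma) space and are scored by cross-validated regression quality. The data are synthetic stand-ins for the sensor features, and the PSO is bare-bones, not the paper's implementation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# Synthetic stand-ins for (time, depth of cut, feed) -> flank wear
X = rng.uniform(0, 1, size=(150, 3))
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.02, 150)

def fitness(log_params):
    c, gamma = 10.0 ** log_params
    svr = SVR(kernel="rbf", C=c, gamma=gamma)
    return cross_val_score(svr, X, y, cv=3, scoring="r2").mean()

# Bare-bones PSO over log10(C) in [-1, 3] and log10(gamma) in [-3, 1]
n_particles, iters = 12, 20
pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"best log10(C), log10(gamma) = {gbest}, R^2 = {pbest_fit.max():.3f}")
```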

  2. Economic growth and CO2 emissions: an investigation with smooth transition autoregressive distributed lag models for the 1800-2014 period in the USA.

    PubMed

    Bildirici, Melike; Ersin, Özgür Ömer

    2018-01-01

    The study aims to combine the autoregressive distributed lag (ARDL) cointegration framework with smooth transition autoregressive (STAR)-type nonlinear econometric models for causal inference. Further, the proposed STAR distributed lag (STARDL) models offer new insights in terms of modeling nonlinearity in the long- and short-run relations between the analyzed variables. The STARDL method allows modeling and testing nonlinearity in the short-run parameters, the long-run parameters, or both. To this aim, the relation between CO2 emissions and economic growth rates in the USA is investigated for the 1800-2014 period, one of the largest data sets available. The proposed hybrid models, the logistic, exponential, and second-order logistic smooth transition autoregressive distributed lag (LSTARDL, ESTARDL, and LSTAR2DL) models, combine the STAR framework with nonlinear ARDL-type cointegration to augment the linear ARDL approach with smooth transitional nonlinearity. The proposed models provide a new approach to the relevant econometrics and environmental economics literature. Our results indicated the presence of asymmetric long-run and short-run relations between the analyzed variables, running from GDP towards CO2 emissions. Using the newly proposed STARDL models, the results point to important differences in the response of CO2 emissions in regimes 1 and 2 for the estimated LSTAR2DL and LSTARDL models.
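
    The smooth transition at the core of these models is a logistic function of a transition variable. A minimal sketch of the first-order and second-order logistic transitions used by LSTARDL- and LSTAR2DL-type models (parameter values illustrative only):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """First-order logistic transition G(s) in [0, 1] (LSTAR-type)."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def logistic2_transition(s, gamma, c1, c2):
    """Second-order logistic transition (LSTAR2-type), low in the mid-regime."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c1) * (s - c2)))

s = np.linspace(-2, 4, 7)   # e.g. lagged GDP growth as the transition variable
print(logistic_transition(s, gamma=2.0, c=1.0).round(3))
print(logistic2_transition(s, gamma=2.0, c1=0.0, c2=2.0).round(3))
```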

  3. A 3PG-based Model to Simulate Delta-13C Content in Three Tree Species in The Mica Creek Experiment Watershed, Idaho

    NASA Astrophysics Data System (ADS)

    Wei, L.; Marshall, J. D.

    2007-12-01

    3PG (Physiological Principles in Predicting Growth), a process-based physiological model of forest productivity, has been widely used and well validated. Based on 3PG, a 3PG-δ13C model to simulate δ13C content in plant tissue is built in this research. 3PG calculates carbon assimilation from utilizable absorbed photosynthetically active radiation (PAR), and calculates stomatal conductance from maximum canopy conductance multiplied by a physiological modifier that includes the effects of water vapor deficit and soil water. The equation of Farquhar and Sharkey (1982) is then used to calculate δ13C content in plant tissue. Five even-aged coniferous forest stands located near Clarkia, Idaho (47°15'N, 115°25'W) in the Mica Creek Experimental Watershed (MCEW) were chosen to test the model (two stands had been partially cut, with 50% canopy removal in 1990, and three were uncut). MCEW has been extensively investigated since 1990, and many of the parameters needed for 3PG are readily available. Each of these sites is located near a UI meteorological station, which has recorded half-hourly climatic data since 2003. These site-specific climatic data were extended back to 1991 by correlation with data from a nearby SNOTEL station (SNOwpack TELemetry, NRCS, 47°9' N, 116°16' W). Forest mensuration data were obtained from each stand using variable radius plots (VRP). Three tree species, which constitute more than 95% of all trees, were parameterized for the 3PG model: grand fir (Abies grandis Donn ex D. Don), western red cedar (Thuja plicata Donn ex D. Don) and Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco). Because four out of five stands have mixed species, we also used parameters for mixed stands to run the model. To stabilize the model, it was initially run under average climatic data for 20 years, and then run under the actual climatic data from 1991 to 2006. As 3PG runs on a monthly time step, monthly δ13C values were calculated first, and then yearly values were calculated as weighted averages. To test the model, tree cores were collected from each stand and species. Ring widths of the tree cores were measured and cross-dated with a ring-width chronology obtained from MCEW. δ13C contents of tree-ring samples from known years were measured. Preliminary results indicate that 3PG-δ13C simulated values are consistent with observed values in tree rings. δ13C values of the modeled species differ: western red cedar has the highest δ13C values among the three species and western larch has the lowest.
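
    The Farquhar-type expression relates leaf carbon isotope discrimination to the ratio of intercellular to ambient CO2. A minimal sketch of the widely used simple form with standard fractionation constants; the paper's exact coupling of ci/ca to 3PG's conductance modifiers is not reproduced:

```python
def delta13c_plant(ci_over_ca, delta_air=-8.0, a=4.4, b=27.0):
    """Simple Farquhar-form leaf carbon isotope composition (permil).

    ci_over_ca : intercellular-to-ambient CO2 ratio (derived from conductance
                 and assimilation in the full model)
    delta_air  : isotopic composition of atmospheric CO2
    a, b       : diffusion and carboxylation fractionations (standard values)
    """
    discrimination = a + (b - a) * ci_over_ca
    return delta_air - discrimination

# Lower ci/ca (more stomatal limitation, e.g. dry months) -> less negative d13C
for ci_ca in (0.5, 0.7, 0.9):
    print(f"ci/ca = {ci_ca}: d13C = {delta13c_plant(ci_ca):.1f} permil")
```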

  4. Investigating fluvial pattern and delta-planform geometry based on varying intervals of flood and interflood

    NASA Astrophysics Data System (ADS)

    Rambo, J. E.; Kim, W.; Miller, K.

    2017-12-01

    Physical modeling of a delta's evolution can show how changing the intervals of flood and interflood alters the delta's fluvial pattern and geometry. Here we present a set of six experimental runs in which sediment and water were discharged at constant rates within each experiment. During the "flood" period, sediment and water were discharged at rates of 0.25 cm3/s and 15 ml/s, respectively, and during the "interflood" period, only water was discharged, at 7.5 ml/s. The flood periods were run for a total of 30 minutes to keep the total volume of sediment constant. Run 0 did not have an interflood period and therefore ran with constant sediment and water discharge for the duration of the experiment. The other five runs had 5, 10, or 15-minute intervals of flood paired with 5, 10, or 15-minute intervals of interflood. The experimental results show that Run 0 had the smallest topset area. This is due to a lack of the surface reworking that takes place during interflood periods. Run 1 had 15-minute intervals of flood and 15-minute intervals of interflood, and it had the largest topset area. Additionally, the experiments that had longer intervals of interflood than flood produced more elongated delta geometries. Wetted-fraction color maps were also created to plot channel locations during each run. The maps show that the runs with longer interflood durations had channels occurring predominantly down the middle with stronger incision; these runs produced deltas with more elongated geometries. When the interflood duration was even longer, however, strong channels started to occur at multiple locations. This increased interflood period allowed the entire area of the delta's surface to be reworked, thus reducing the downstream slope and allowing channels to be more mobile laterally. Physical modeling of a delta allows us to predict a delta's resulting geometry given a set of conditions. This insight is needed especially with deltas being home to large human populations and habitat for various other species.

  5. Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal mainland rainy season. The periods of heavy to extremely heavy rainfall were due to several low-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded, on average, the climatological mean for the 1971-2000 period by 89 mm, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location is included (RunObsN); (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the model performance of the other two runs was also good, so that the selected extreme rainfall episode was successfully reproduced.

  6. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based algorithm, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based algorithm, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
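
    A minimal sketch of the label-equivalence idea behind both algorithms: a first pass assigns provisional labels and records equivalences in a union-find structure, and a second pass replaces each provisional label with its representative. This simple 6-connectivity version is for illustration only, not the paper's optimized voxel- or run-based code:

```python
import numpy as np

def label_3d(volume):
    """Two-pass label-equivalence connected-component labeling, 6-connectivity."""
    labels = np.zeros(volume.shape, dtype=int)
    parent = [0]                                   # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[max(ri, rj)] = min(ri, rj)

    nz, ny, nx = volume.shape
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z, y, x]:
                    continue
                # Already-visited neighbors in the 6-connectivity mask
                nbrs = [labels[z, y, x - 1] if x else 0,
                        labels[z, y - 1, x] if y else 0,
                        labels[z - 1, y, x] if z else 0]
                nbrs = [n for n in nbrs if n]
                if not nbrs:
                    parent.append(len(parent))     # new provisional label
                    labels[z, y, x] = len(parent) - 1
                else:
                    labels[z, y, x] = min(nbrs)
                    for n in nbrs:                 # record equivalences
                        union(labels[z, y, x], n)
    for i in range(1, len(parent)):                # resolve to representatives
        parent[i] = find(i)
    return np.array(parent)[labels]

vol = np.zeros((3, 4, 4), dtype=bool)
vol[0, 0, :2] = vol[2, 3, 2:] = vol[1, 1, 1] = True
print(np.unique(label_3d(vol))[1:])   # three components -> labels 1, 2, 3
```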

  7. A finite element analysis of the optimal bending angles in a running loop for mesial translation of a mandibular molar using indirect skeletal anchorage.

    PubMed

    Kim, M-J; Park, J H; Kojima, Y; Tai, K; Chae, J-M

    2018-02-01

    The aim of this study was to estimate the optimal bending angles in a running loop for mesial translation of a mandibular second molar using indirect skeletal anchorage, and to clarify the mechanics of tipping and rotating the molar. A three-dimensional finite element model was developed for predicting tooth movement, and a mechanical model based on beam theory was constructed for clarifying the force systems. When using a running loop without bends, the molar tipped mesially 14.4° and lingually 0.6°, rotated counterclockwise 4.1°, and the incisors retracted 0.02 mm and intruded 0.05 mm. These angles were about the same as those estimated by beam theory. When the tip-back and toe-in angles were 11.0°, mesial translation of the molar was achieved, and the incisors retracted 0.10 mm and intruded 0.30 mm. Mesial translation of a mandibular second molar without any significant movement of the anterior teeth was achieved during protraction by controlling the tip-back and toe-in angles and enhancing anterior anchorage with the combined use of a running loop and indirect skeletal anchorage.

  8. Uncertainties in shoreline position analysis: the role of run-up and tide in a gentle slope beach

    NASA Astrophysics Data System (ADS)

    Manno, Giorgio; Lo Re, Carlo; Ciraolo, Giuseppe

    2017-09-01

    In recent decades in the Mediterranean Sea, high anthropic pressure from increasing economic and touristic development has affected several coastal areas. Today, erosion phenomena threaten human activities and existing structures, and interdisciplinary studies are needed to better understand actual coastal dynamics. Beach evolution analysis can be conducted using GIS methodologies, such as the well-known Digital Shoreline Analysis System (DSAS), in which error assessment of shoreline positioning plays a significant role. In this study, a new approach is proposed to estimate the positioning errors due to tide and wave run-up influence. To improve the assessment of the wave run-up uncertainty, a spectral numerical model was used to propagate waves from deep to intermediate water, and a Boussinesq-type model from intermediate water up to the swash zone. Tide effects on the uncertainty of shoreline position were evaluated using data collected by a nearby tide gauge. The proposed methodology was applied to an unprotected, dissipative Sicilian beach far from harbors and subjected to intense human activities over the last 20 years. The results show wave run-up and tide errors ranging from 0.12 to 4.5 m and from 1.20 to 1.39 m, respectively.
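
    When the individual positioning error terms are treated as independent, a common convention in shoreline-change analyses is to combine them in quadrature. A small sketch under that assumption, using the worst-case run-up and tide errors quoted above together with hypothetical digitizing and georeferencing terms:

```python
import math

def shoreline_position_error(*components):
    """Combine independent positioning error terms in quadrature (m)."""
    return math.sqrt(sum(e * e for e in components))

# Worst-case run-up and tide errors from the abstract, plus hypothetical
# digitizing and georeferencing terms (illustrative values only).
e_runup, e_tide = 4.5, 1.39
e_digitizing, e_georef = 0.5, 1.0
total = shoreline_position_error(e_runup, e_tide, e_digitizing, e_georef)
print(f"total positioning error: {total:.2f} m")
```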

  9. The University of Tulsa School for Gifted Children Enaction Curriculum, 1991-92.

    ERIC Educational Resources Information Center

    Hollingsworth, Patricia L., Ed.

    This document summarizes the curriculum at the University of Tulsa School for Gifted Children in Oklahoma. The curriculum is based on enaction theory which postulates that thinking is a matter of running a simulation in one's head and involves three steps: (1) creating a mental model; (2) manipulating that model; and (3) developing a strategy for…

  10. User Manual for Personnel Inventory Aging and Promotion Model

    DTIC Science & Technology

    2009-06-01

    increased by 12. Now, an SQL statement deletes records where [target] = NULL, and the model calculates the number of E8s that need to be promoted to...the run, the [Likelihood] and [Expected] tables are created. The first step in this process is to dynamically build an SQL statement, based on the...This table has individual-level, longitudinal records. Next, a dynamically built SQL statement based on the Number of Years, creates a new data

  11. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  12. Local search to improve coordinate-based task mapping

    DOE PAGES

    Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...

    2015-10-31

    We present a local search strategy to improve the coordinate-based mapping of a parallel job's tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job's communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK systems. Utilizing the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing the runtime variability. We further show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
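
    A bare-bones sketch of such a local search: given stencil neighbors and scattered processor coordinates, repeatedly try swapping the processors assigned to two tasks and keep any swap that reduces the total hop count (a toy 2-D greedy version of the idea, not the paper's algorithm):

```python
import itertools
import random

def hops(a, b):
    """Manhattan distance between two mesh coordinates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def total_hops(mapping, neighbors, proc_coords):
    return sum(hops(proc_coords[mapping[u]], proc_coords[mapping[v]])
               for u, v in neighbors)

# 4x4 grid of tasks with a nearest-neighbor stencil communication pattern
tasks = list(itertools.product(range(4), range(4)))
neighbors = [(t, (t[0] + 1, t[1])) for t in tasks if t[0] < 3] + \
            [(t, (t[0], t[1] + 1)) for t in tasks if t[1] < 3]

# Non-contiguous allocation: 16 processors scattered on an 8x8 mesh
random.seed(0)
proc_coords = random.sample(list(itertools.product(range(8), range(8))), 16)
mapping = {t: i for i, t in enumerate(tasks)}    # initial arbitrary mapping

best = total_hops(mapping, neighbors, proc_coords)
improved = True
while improved:                                  # greedy pairwise-swap search
    improved = False
    for t1, t2 in itertools.combinations(tasks, 2):
        mapping[t1], mapping[t2] = mapping[t2], mapping[t1]
        cost = total_hops(mapping, neighbors, proc_coords)
        if cost < best:
            best, improved = cost, True
        else:                                    # undo an unhelpful swap
            mapping[t1], mapping[t2] = mapping[t2], mapping[t1]
print("total hops after local search:", best)
```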

  13. Integration of car-body flexibility into train-track coupling system dynamics analysis

    NASA Astrophysics Data System (ADS)

    Ling, Liang; Zhang, Qing; Xiao, Xinbiao; Wen, Zefeng; Jin, Xuesong

    2018-04-01

    The resonance vibration of flexible car-bodies greatly affects the dynamic performance of high-speed trains. In this paper, we report a three-dimensional train-track model that captures the flexible vibration features of high-speed train carriages based on the flexible multi-body dynamics approach. The flexible car-body is modelled using both the finite element method (FEM) and the multi-body dynamics (MBD) approach, in which the rigid motions are obtained by using MBD theory and the structural deformation is calculated by the FEM and the modal superposition method. The proposed model is applied to investigate the influence of the flexible vibration of car-bodies on the dynamic performance of train-track systems. The dynamic performances of a high-speed train running on a slab track, including the car-body vibration behaviour, the ride comfort, and the running safety, calculated by the numerical models with rigid and flexible car-bodies are compared in detail. The results show that the car-body flexibility not only significantly affects the vibration behaviour and ride comfort of rail carriages, but can also have an important influence on the running safety of trains. The rigid car-body model underestimates the vibration level and ride comfort of rail vehicles, and ignoring carriage torsional flexibility in the curving safety evaluation of trains is conservative.

  14. Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?

    NASA Technical Reports Server (NTRS)

    Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.

    2006-01-01

    A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species: cloud ice, snow, and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.

  15. Generation, propagation and run-up of tsunamis due to the Chicxulub impact event

    NASA Astrophysics Data System (ADS)

    Weisz, R.; Wuennenmann, K.; Bahlburg, H.

    2003-04-01

    The Chicxulub impact event can be investigated at (1) local, (2) regional and (3) global scales. Our investigation focuses on the regional scale, especially on the influence of impact-generated tsunami waves on the coast around the Gulf of Mexico. During an impact, two types of tsunamis are generated. The first wave is known as the "rim wave" and is generated in front of the ejecta curtain. The second one is linked to the late modification stage of the impact and results from the collapsing cavity of water; we designate this wave the "collapse wave". The "rim wave" and "collapse wave" are able to propagate over long distances without a significant loss of wave amplitude. Corresponding to their amplitudes, the waves have a potentially large influence on coastal areas. Run-up distance and run-up height can be used as parameters for describing this influence. We utilize a multimaterial hydrocode (SALE) to simulate the generation of tsunami waves. The propagation of the waves is based on non-linear shallow water theory, because tsunami waves are defined to be long waves. The position of the coastline varies according to the tsunami run-up and is implemented with open boundary conditions. Our investigations show (1) the generation of tsunami waves due to shallow water impacts, (2) wave damping during propagation, and (3) the influence of the "rim wave" and the "collapse wave" on the coastal areas. Here, we present our first results from numerical modeling of tsunami waves owing to a Chicxulub-sized impactor. The characteristics of the "rim wave" depend on the size of the bolide and the water depth, whereas the amplitude and velocity of the "collapse wave" are determined only by the water depth in the impact area. The numerical modeling of tsunami propagation and run-up is carried out along a section from the impact point towards the west and shows moderate damping of both waves and the run-up on the coastal area. As a first approximation, the bathymetric data used in the wave propagation and run-up correspond to a linearized bathymetry of the recent Gulf of Mexico. The linearized bathymetry allows us to study the influence of the bathymetry on wave propagation and run-up. Additionally, we give preliminary results of the implementation of a two-dimensional propagation and run-up model for arbitrary bathymetries. The two-dimensional wave propagation model will enable us to more realistically assess the influence of the impact-related tsunamis on the coasts around the Gulf of Mexico due to the Chicxulub impact event.
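
    Two relations behind the shallow-water treatment make the regional behaviour concrete: the long-wave phase speed c = sqrt(g*h) and Green's law for shoaling amplitude. A small sketch with illustrative depths and amplitude, not values from the simulations:

```python
import math

G = 9.81  # m/s^2

def longwave_speed(depth_m):
    """Shallow-water (long-wave) phase speed c = sqrt(g*h)."""
    return math.sqrt(G * depth_m)

def greens_law_amplitude(a1, h1, h2):
    """Green's law shoaling: amplitude grows as depth**(-1/4)."""
    return a1 * (h1 / h2) ** 0.25

# Illustrative: a 10 m wave generated over 3000 m water shoaling to 50 m depth
print(f"speed in deep basin: {longwave_speed(3000):.0f} m/s")
print(f"speed on the shelf:  {longwave_speed(50):.0f} m/s")
print(f"shoaled amplitude:   {greens_law_amplitude(10.0, 3000.0, 50.0):.1f} m")
```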

  16. Older females are at higher risk for medical complications during 21 km road race running: a prospective study in 39 511 race starters--SAFER study III.

    PubMed

    Schwabe, Karen; Schwellnus, Martin P; Derman, Wayne; Swanevelder, Sonja; Jordaan, Esme

    2014-06-01

    The half-marathon (21 km) race is a very popular mass community-based distance running event. It is important to determine risk factors for medical complications during these events so that prevention programmes can be developed. The objective of this prospective study was to determine risk factors associated with medical complications during 21 km road running events, based on the Two Oceans half-marathon (21 km) races and 39 511 starters in the 21 km race. Medical complications (defined as any runner requiring assessment by a doctor at the race medical facility or a local hospital on race day) were recorded over a 4-year study period. Medical complications were subdivided according to the system affected and by final diagnosis. A Poisson regression model was used to determine risk factors for any medical complication and for the more common specific complications. Independent risk factors for medical complications during 21 km running were older female runners (women >50 vs ≤50 years; p<0.0001) and year of observation (2008 vs 2011: p=0.0201; 2009 vs 2011: p=0.0019; 2010 vs 2011: p=0.0096). Independent risk factors for specific common medical complications were: postural hypotension (women, slow running pace), musculoskeletal complications (less running experience, slower running pace) and dermatological complications (women). Older female runners are at higher risk of developing medical complications during 21 km road running races. Environmental conditions in a particularly cold climate may also play a role. Less running experience and slower running pace are associated with specific medical complications. Medical staff can now plan appropriate care on race days, and interventions can be developed to reduce the risk of medical complications in 21 km races.
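
    A minimal sketch of the modelling step: a Poisson regression of complication counts on risk factors with a log-exposure offset, here on synthetic data via statsmodels (variables and effect sizes invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(21)
n = 400
female = rng.integers(0, 2, n)
over50 = rng.integers(0, 2, n)
pace = rng.normal(6.5, 1.0, n)            # min/km, larger = slower
# Synthetic complication rates: older females and slower pace raise the rate
lam = np.exp(-3.0 + 0.4 * female * over50 + 0.15 * (pace - 6.5))
complications = rng.poisson(lam * 10)     # counts over 10 "race exposures"

X = sm.add_constant(np.column_stack([female, over50, female * over50, pace]))
model = sm.GLM(complications, X, family=sm.families.Poisson(),
               offset=np.log(np.full(n, 10.0)))   # log-exposure offset
print(model.fit().summary().tables[1])
```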

  17. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  18. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    PubMed Central

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise, whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR rats can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500
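
    To make the reported phenotype differences concrete, here is a toy flux-competition sketch, emphatically not the authors' thermodynamically constrained whole-body model: the competition form, saturation constant, and units are invented; only the ~50% mitochondrial-activity reduction and 5-fold FA-transport reduction for LCR follow the abstract.

```python
import numpy as np

def fuel_fluxes(atp_demand, mito_activity, fa_transport):
    """Toy competition between glucose and fatty-acid (FA) oxidation.

    FA oxidation saturates with the capacity of mitochondrial FA
    transport; glucose oxidation covers the remaining ATP demand.
    The saturation constant and units are arbitrary illustrations.
    """
    k_fa = 0.5                                   # assumed constant
    fa_capacity = mito_activity * fa_transport
    j_fa = fa_capacity * atp_demand / (k_fa + atp_demand)
    j_fa = min(j_fa, atp_demand)                 # FA cannot exceed demand
    j_glc = atp_demand - j_fa                    # glucose covers the rest
    return j_glc, j_fa

for d in np.linspace(0.1, 2.0, 5):               # rising exercise intensity
    hcr = fuel_fluxes(d, mito_activity=1.0, fa_transport=1.0)
    # LCR per the abstract: ~50% total mitochondrial activity,
    # plus a further 5-fold reduction in FA transport activity
    lcr = fuel_fluxes(d, mito_activity=0.5, fa_transport=0.2)
    print(f"demand {d:.2f}: HCR FA fraction {hcr[1]/d:.2f}, "
          f"LCR FA fraction {lcr[1]/d:.2f}")
```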

  19. The effect of muscle stiffness and damping on simulated impact force peaks during running.

    PubMed

    Nigg, B M; Liu, W

    1999-08-01

    It has been frequently reported that vertical impact force peaks during running change only minimally when changing the midsole hardness of running shoes. However, the underlying mechanism for these experimental observations is not well understood. An athlete has several means of influencing external and internal forces during ground contact (e.g. landing velocity, geometrical alignment, muscle tuning, etc.). The purpose of this study was to discuss one possible strategy to influence external impact forces acting on the athlete's body during running: changing muscle activity (muscle tuning). The human body was modeled as a simplified mass-spring-damper system. The model included masses of the upper and the lower body, with each part of the body represented by a rigid and a non-rigid wobbling mass. The influence of mechanical properties of the human body on the vertical impact force peak was examined by varying the spring constants and damping coefficients of the spring-damper units that connected the various masses. Two types of shoe soles were modeled using a non-linear force-deformation model with two sets of parameters based on the force-deformation curves of pendulum impact experiments. The simulated results showed that the regulation of the mechanical coupling of rigid and wobbling masses of the human body had an influence on the magnitude of the vertical impact force, but not on its loading rate. It was possible to produce the same impact force peaks for a soft and a hard shoe sole by altering specific mechanical properties of the system. This regulation can be achieved through changes of joint angles, changes in joint angular velocities and/or changes in muscle activation levels in the lower extremity. Therefore, it was concluded that changes in muscle activity (muscle tuning) can be used as a possible strategy to affect vertical impact force peaks during running.
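
    The paper's model couples several rigid and wobbling masses; the sketch below integrates only a single mass striking a spring-damper shoe sole, so the stiffness, damping, and landing-velocity values are assumptions chosen purely to show how such parameters shift the vertical impact force peak.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 75.0           # body mass (kg), assumed
k_shoe = 2.0e5     # shoe-sole stiffness (N/m), assumed
c = 1.0e3          # damping coefficient (N*s/m), assumed
g = 9.81
v0 = -1.0          # downward landing velocity (m/s), assumed

def dynamics(t, y):
    """y = [sole compression coordinate, vertical velocity]."""
    z, v = y
    # Ground reaction acts only while the sole is compressed (z < 0)
    f_ground = (-k_shoe * z - c * v) if z < 0 else 0.0
    return [v, f_ground / m - g]

sol = solve_ivp(dynamics, [0, 0.1], [0.0, v0], max_step=1e-4)
z, v = sol.y
grf = np.where(z < 0, -k_shoe * z - c * v, 0.0)
print("vertical impact force peak: %.0f N (%.1f body weights)"
      % (grf.max(), grf.max() / (m * g)))
```

    Re-running the sketch with different k_shoe and c values illustrates the paper's point: several parameter combinations can produce the same force peak.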

  20. ALICE HLT Run 2 performance overview.

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For the LHC Run 2, the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using the online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline, and the HLT framework was extended to support this. The performance of this scheme is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop, augmenting the existing, graph-oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed on top of the new messaging scheme, for both internal and external communication, to allow real-time monitoring of the physics performance of the ALICE detector. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster runs continuously, while the production cluster contributes resources opportunistically during periods of LHC inactivity.

  1. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in the climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference, where the latter allows one to obtain reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
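
    A drastically reduced 1D sketch of the piecewise Polynomial Chaos idea follows: locate a discontinuity (here crudely, by the largest jump between samples, rather than the paper's Bayesian inference) and fit separate Legendre PC expansions by least-squares regression on each side. The synthetic test function, sample size, and expansion order are all assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(1)

def model(x):
    """Synthetic discontinuous response over one uncertain input x."""
    return np.where(x < 0.3, np.sin(2 * x), 2.0 + 0.5 * x)

# Limited number of "model runs"
x = np.sort(rng.uniform(-1, 1, 40))
y = model(x)

# Crude discontinuity detection: largest jump between neighbours
# (the paper infers the discontinuity location with Bayesian inference)
split = x[np.argmax(np.abs(np.diff(y)))]

coeffs = {}
for side, mask in (("left", x < split), ("right", x >= split)):
    # Map each side onto [-1, 1], where Legendre polynomials are orthogonal
    lo, hi = x[mask].min(), x[mask].max()
    xi = 2 * (x[mask] - lo) / (hi - lo) - 1
    coeffs[side] = leg.legfit(xi, y[mask], deg=3)   # PC modes by regression

print("discontinuity located near x =", round(split, 3))
print("left-side PC modes:", np.round(coeffs["left"], 3))
```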

  2. Updates to watershed modeling in the Potholes Reservoir basin, Washington-a supplement to Scientific Investigation Report 2009-5081

    USGS Publications Warehouse

    Mastin, Mark

    2012-01-01

    A previous collaborative effort between the U.S. Geological Survey and the Bureau of Reclamation resulted in a watershed model for four watersheds that discharge into Potholes Reservoir, Washington. Since the model was constructed, two new meteorological sites have been established that provide more reliable real-time information. The Bureau of Reclamation was interested in incorporating this new information into the existing watershed model developed in 2009, and in adding measured snowpack information to update simulated results and improve forecasts of runoff. This report describes procedures to aid a user in making model runs, including a description of the Object User Interface for the watershed model with details on the specific keystrokes needed to generate model runs for the contributing basins. A new real-time, data-gathering computer program automates the creation of the model input files and includes the new meteorological sites. The 2009 watershed model was updated with the new sites and validated by comparing simulated results to measured data. As in the previous study, the updated model (2012 model) does a poor job of simulating individual storms, but a reasonably good job of simulating seasonal runoff volumes. At three streamflow-gaging stations, the January 1 to June 30 retrospective forecasts of runoff volume for 2010 and 2011 were within 40 percent of the measured runoff volume for five of the six comparisons, ranging from -39.4 to 60.3 percent difference. A procedure for collecting measured snowpack data and using the data in the watershed model for forecast model runs, based on the Ensemble Streamflow Prediction method, is described, with an example that uses 2004 snow-survey data.

  3. Simulation environment and graphical visualization environment: a COPD use-case

    PubMed Central

    2014-01-01

    Background: Today, many different tools are developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results: In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims to help bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system consisting of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, i.e., chronic obstructive pulmonary disease. Conclusion: It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology by the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327

  4. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid-latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters, such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. We recommend that research be prioritized towards more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
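
    A minimal sketch of the variance-based Sobol workflow on a stand-in function, using the SALib package; the three named parameters, their bounds, and the cheap "emulator" are illustrative assumptions, not the ~40 CICE parameters or the trained surrogate from the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative 3-parameter problem standing in for CICE's ~40 parameters
problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "pond_drainage"],
    "bounds": [[0.1, 0.5], [50, 400], [0.0, 1.0]],
}

# Saltelli sampling of the full parameter space
X = saltelli.sample(problem, 1024)

def emulator(x):
    """Cheap stand-in for the trained sea ice surrogate model."""
    return np.sin(x[:, 0] * 10) + 0.5 * x[:, 1] / 400 + x[:, 0] * x[:, 2]

Y = emulator(X)

# First-order and total Sobol sensitivity indices per parameter
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total {st:.2f}")
```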

  5. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
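
    The SOAP-era stack described above (XML, SOAP, WSDL, UDDI) has largely given way to plainer HTTP services, but the architectural point, wrapping a server-side computation so that thin clients in any language can invoke it, can be sketched with nothing more than the Python standard library. The toy endpoint below is a hypothetical stand-in for services like the hazard calculator or coordinate converter, not SCEC/CME code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def compute(payload):
    """Server-side computation; a stand-in for a calculation too
    expensive, or too platform-bound, to run on the client."""
    return {"sum": sum(payload.get("values", []))}

class ServiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body sent by the client
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(compute(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients anywhere on the network POST JSON and get JSON back,
    # independent of their own language or runtime environment.
    HTTPServer(("0.0.0.0", 8080), ServiceHandler).serve_forever()
```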

  6. Comparison of physiological and acid-base balance response during uphill, level and downhill running performed at constant velocity.

    PubMed

    Maciejczyk, Marcin; Więcek, M; Szymura, J; Szyguła, Z

    2013-09-01

    The purpose of this study was to compare the physiological and acid-base balance responses to running at various slope angles. Ten healthy men, aged 22.3 ± 1.56 years, participated in the study. The study consisted of a graded test completed until exhaustion and three 45-minute runs. For the first 30 minutes, runs were performed at an intensity of approximately 50% VO2max, while in the final 15 minutes the slope angle of the treadmill was adjusted (0°; +4.5°; -4.5°) and a fixed running velocity was maintained. During concentric exercise, a significant increase in the levels of physiological indicators was reported; during eccentric exercise, a significant decrease in the level of the analyzed indicators was observed. Level running did not cause significant changes in the indicators of acid-base balance. The indicators of acid-base balance changed significantly in the case of concentric muscle work (in comparison to level running), and after the eccentric work significant and beneficial changes were observed in most of the biochemical indicators. The downhill run can be used for partial regeneration of the body during exercise, because during this kind of effort an improvement in running economy was observed and the acid-base balance of the body was not impaired.

  7. A Multi-Season Study of the Effects of MODIS Sea-Surface Temperatures on Operational WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.

    2008-01-01

    Studies at the Short-term Prediction Research and Transition (SPORT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower-atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact for NWS Weather Forecast Offices (WFOs), the SPORT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC, on a domain with 4-km grid spacing covering the southern half of Florida and the adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPORT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPORT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPORT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations were collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPORT experimental runs, and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.

  8. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  9. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  10. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble-sized telescope, donated from elsewhere in the federal government, for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. To reduce the total number of radiation calculation points, a technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes. The inclusion of a cryocooler fluid loop in the model also forced smaller time steps than desired, which greatly increased the overall run time; this was mitigated by solving portions of the model at different time scales. Lastly, the impact of removing small radiation couplings on run time and accuracy was investigated. Use of these techniques allowed the models to produce meaningful results within reasonable run times, meeting project schedule deadlines.

  11. A Model-Based Expert System for Space Power Distribution Diagnostics

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Schlegelmilch, Richard F.

    1994-01-01

    When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.

  12. RCK: accurate and efficient inference of sequence- and structure-based protein-RNA binding models from RNAcompete data.

    PubMed

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-06-15

    Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state of the art in sequence models, DeepBind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data are designed to be unstructured, RCK can still learn structural preferences from them. RCK significantly outperforms both RNAcontext and DeepBind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small-scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Contact: bab@mit.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
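
    RCK's core idea is a k-mer based model of binding preferences. A stripped-down sketch of the sequence half follows, scoring a probe as the sum of learned k-mer weights; the weights here are invented for illustration (the real model learns them, per structural context, from RNAcompete binding intensities).

```python
from itertools import product

K = 4
ALPHABET = "ACGU"

# Hypothetical learned weights: one per k-mer (RCK additionally keeps
# separate weights per predicted structural context)
weights = {"".join(kmer): 0.0 for kmer in product(ALPHABET, repeat=K)}
weights["UGCA"] = 1.5      # invented example of a preferred k-mer
weights["GCAU"] = 0.9

def score(sequence):
    """Score an RNA probe as the sum of its overlapping k-mer weights."""
    return sum(weights[sequence[i:i + K]]
               for i in range(len(sequence) - K + 1))

print(score("AAUGCAUU"))   # contains both weighted k-mers -> 2.4
```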

  13. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    NASA Technical Reports Server (NTRS)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
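
    Nodal analysis with the hydraulic-electric analogy reduces a flow network to a linear system G·p = q (conductances, node potentials, source flows). The sketch below shows only that core solve on an invented three-node network; it is not GUNNS code, which is a C++ package integrated with Trick.

```python
import numpy as np

# Link list: (node_i, node_j, conductance) -- hydraulic or electrical
links = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]
sources = np.array([1.0, 0.0, -1.0])   # net inflow at each node

n = 3
G = np.zeros((n, n))
for i, j, g in links:
    # Standard nodal-admittance "stamping" of each link
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g

# Ground node 0 as the reference potential so the system is non-singular
G[0, :] = 0.0
G[0, 0] = 1.0
b = sources.copy()
b[0] = 0.0

potentials = np.linalg.solve(G, b)
print("node potentials:", potentials)   # pressures or voltages
```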

  14. A preliminary study of a running speed based heart rate prediction during an incremental treadmill exercise.

    PubMed

    Dae-Geun Jang; Byung-Hoon Ko; Sub Sunoo; Sang-Seok Nam; Hun-Young Park; Sang-Kon Bae

    2016-08-01

    This preliminary study investigates the feasibility of running speed based heart rate (HR) prediction. It is motivated by the assumption that there is a significant relationship between HR and running speed. To verify this assumption, HR and running speed data from 217 subjects of varying aerobic capability were collected simultaneously during an incremental treadmill exercise. Running speed was defined as the treadmill speed, and its corresponding heart rate was calculated by averaging the HR values over the last minute of each session. Feasibility was investigated by assessing the correlation between heart rate and running speed using inter-subject (between-subject) and intra-subject (within-subject) datasets with regression orders of 1, 2, 3, and 4. Furthermore, HR differences between actual and predicted HRs were also used to assess the feasibility of predicting heart rate from running speed. In the inter-subject analysis, a strong positive correlation and a reasonable HR difference (r = 0.866, 16.55±11.24 bpm @ 1st order; r = 0.871, 15.93±11.49 bpm @ 2nd order; r = 0.897, 13.98±10.80 bpm @ 3rd order; and r = 0.899, 13.93±10.64 bpm @ 4th order) were obtained, and a very high positive correlation and a very low HR difference (r = 0.978, 6.46±3.89 bpm @ 1st order; r = 0.987, 5.14±2.87 bpm @ 2nd order; r = 0.996, 2.61±2.03 bpm @ 3rd order; and r = 0.997, 2.04±1.73 bpm @ 4th order) were obtained in the intra-subject analysis. It can therefore be concluded that 1) heart rate is highly correlated with running speed; 2) heart rate can be approximately estimated from running speed with a proper statistical model (e.g., 3rd-order regression); and 3) an individual HR-speed calibration process may improve prediction accuracy.
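
    The intra-subject gains above come from fitting per-subject polynomial regressions of HR on treadmill speed. A minimal sketch with numpy and invented stage data for one subject (the speeds and HR values are not from the study):

```python
import numpy as np

# Invented incremental-treadmill data for one subject:
# treadmill speed (km/h) and last-minute average HR (bpm) per stage
speed = np.array([4, 6, 8, 10, 12, 14], dtype=float)
hr = np.array([95, 110, 128, 148, 165, 181], dtype=float)

# 3rd-order regression, the order the study found to work well
coeffs = np.polyfit(speed, hr, deg=3)
predict = np.poly1d(coeffs)

# Individual HR-speed calibration: predict HR at a new running speed
print("predicted HR at 11 km/h: %.1f bpm" % predict(11.0))
print("mean abs error on training stages: %.2f bpm"
      % np.mean(np.abs(predict(speed) - hr)))
```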

  15. 78 FR 61946 - Pheasant Run Wind, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2461-000] Pheasant Run Wind, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Run Wind, LLC's application for market-based rate authority, with an accompanying rate schedule...

  16. A framework for process-based assessment of regional climate model experiments: applied to projections of southern African precipitation

    NASA Astrophysics Data System (ADS)

    James, Rachel; Washington, Richard; Jones, Richard

    2015-04-01

    There is a demand from adaptation planners for regional climate change projections, particularly the finer resolution data delivered by regional models. However, climate models are subject to important uncertainties, and their projections diverge substantially, particularly for precipitation. So how should decision makers know which futures to consider and which to disregard? Model evaluation is clearly a priority. The majority of studies seeking to assess the validity of projections are based on comparison of the models' twentieth-century climatologies with observations or reanalysis. Whilst this work is very important, examination of the modelled mean state is not sufficient to assess the credibility of modelled changes. Direct investigation of the mechanisms for change is also vital. In this study, a framework for process-based analysis of projections is presented, whereby circulation changes accompanying future responses are examined and then compared to atmospheric dynamics during historical years in models and reanalyses. This framework has previously been applied to investigate a drying signal in West Africa, and will here be used to examine projected precipitation change in southern Africa. An ensemble of five global and regional model experiments will be employed, consisting of five perturbed versions of HadCM3 and five corresponding runs of HadRM3P (PRECIS), run over the CORDEX Africa domain. The global and regional model runs show contrasting future responses: there is a strong drying in the global models over southern Africa during the rainy season, but the regional models show drying over Madagascar and the southwest Indian Ocean. Circulation changes associated with these projections will be presented as a first step towards understanding the mechanisms for change and the reasons for the differences between the global and regional models. The interannual variability will also be examined and compared to reanalysis to explore how well the models represent the dipole between southern Africa and Madagascar in the twentieth-century simulations. This analysis could shed light on the credibility of the projected changes and the relative trustworthiness of the global and regional models. The research makes a valuable contribution to the understanding of mechanisms for change in southern Africa. It also has wider relevance for regional climate model studies, highlighting the need to evaluate models on a case-by-case basis and providing a framework for assessment that could be applied to other models and other regions.

  17. An object-oriented, coprocessor-accelerated model for ice sheet simulations

    NASA Astrophysics Data System (ADS)

    Seddik, H.; Greve, R.

    2013-12-01

    Recently, numerous models capable of simulating the thermodynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities span a wide range of features, with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments of the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decreasing cost of access to large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com), with an active user base, is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and essentially targets traditional processors, as the code base was not initially written to run on modern coprocessors (although adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving hardware capabilities (Fortran 2003 provides an object-oriented programming model, but using it here would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element code Sainou is introduced. Sainou is an Elmer fork, reimplemented in Objective-C, used for experimenting with ice sheet models running on coprocessors, essentially GPU devices. GPUs are highly parallel processors that provide opportunities for fine-grained parallelization of the full Stokes problem, using the standard OpenCL language (http://www.khronos.org/opencl/) to access the device. Sainou is built upon a collection of Objective-C base classes that service a modular kernel (itself a base class) providing the core methods to solve the finite element problem. An early implementation of Sainou will be presented, with emphasis on the object architecture and the parallelization strategies. The computation of a simple heat conduction problem is used to test the implementation, which also provides experimental support for running the global matrix assembly on the GPU.

  18. Sterile neutrino searches via displaced vertices at LHCb

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Cazzato, Eros; Fischer, Oliver

    2017-11-01

    We explore the sensitivity of displaced vertex searches at LHCb for testing sterile neutrino extensions of the Standard Model towards explaining the observed neutrino masses. We derive estimates for the constraints on sterile neutrino parameters from a recently published displaced vertex search at LHCb based on run 1 data. They yield the currently most stringent limit on active-sterile neutrino mixing in the sterile neutrino mass range between 4.5 GeV and 10 GeV. Furthermore, we present forecasts for the sensitivities that could be obtained from the run 2 data and also for the high-luminosity phase of the LHC.

  19. Automatic violence detection in digital movies

    NASA Astrophysics Data System (ADS)

    Fischer, Stephan

    1996-11-01

    Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain, collision detection and model-mapping to locate human figures are run, while in the audio domain fingerprints are created and compared to find certain events. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.

  20. Definition of run-off-road crash clusters-For safety benefit estimation and driver assistance development.

    PubMed

    Nilsson, Daniel; Lindman, Magdalena; Victor, Trent; Dozza, Marco

    2018-04-01

    Single-vehicle run-off-road crashes are a major traffic safety concern, as they are associated with a high proportion of fatal outcomes. In addressing run-off-road crashes, the development and evaluation of advanced driver assistance systems requires test scenarios that are representative of the variability found in real-world crashes. We apply hierarchical agglomerative cluster analysis to identify similarities in a set of crash data variables; the resulting clusters can then be used as the basis for test scenario development. Out of 13 clusters, nine test scenarios are derived, corresponding to crashes characterised by: drivers drifting off the road in daytime and night-time, high-speed departures, high-angle departures on narrow roads, highways, snowy roads, loss of control on wet roadways, sharp curves, and high speeds on roads with severe road surface conditions. In addition, each cluster was analysed with respect to crash variables related to the crash cause and the reason for the unintended lane departure. The study shows that cluster analysis of representative data provides a statistically based method for identifying relevant properties for run-off-road test scenarios. This was done to support the development of vehicle-based run-off-road countermeasures and the driver behaviour models used in virtual testing. Future studies should use driver behaviour from naturalistic driving data to further define how test scenarios and behavioural causation mechanisms should be included. Copyright © 2018 Elsevier Ltd. All rights reserved.
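
    The clusters come from hierarchical agglomerative cluster analysis over crash variables. A minimal SciPy sketch on invented, already-encoded crash features follows; real crash data would need careful encoding of mixed categorical variables (e.g., a Gower-type distance), which is glossed over here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# Invented crash records: columns might encode speed, departure angle,
# road width, surface condition, light condition (already standardized)
crashes = rng.normal(size=(200, 5))
crashes[:60, 0] += 3.0        # a "high-speed departure"-like group
crashes[60:100, 1] += 3.0     # a "high-angle departure"-like group

# Agglomerative clustering with Ward linkage on Euclidean distance
Z = linkage(crashes, method="ward")

# Cut the dendrogram into a fixed number of clusters (13 in the paper)
labels = fcluster(Z, t=13, criterion="maxclust")
for c in range(1, 14):
    print(f"cluster {c}: {np.sum(labels == c)} crashes")
```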

  1. Pipeline Processing with an Iterative, Context-based Detection Model

    DTIC Science & Technology

    2014-04-19

    [Abstract fragment] ...stripping the incoming data stream of repeating and irrelevant signals prior to running primary detectors, adaptive beamforming and matched field processing... Keywords: framework, pattern detectors, correlation detectors, subspace detectors, matched field detectors, nuclear explosion monitoring.

  2. Did recent world record marathon runners employ optimal pacing strategies?

    PubMed

    Angus, Simon D

    2014-01-01

    We apply statistical analysis to high-frequency (1 km) split data for the two most recent world-record marathon runs: Run 1 (2:03:59, 28 September 2008) and Run 2 (2:03:38, 25 September 2011). Based on studies in the endurance cycling literature, we develop two principles to approximate 'optimal' pacing in a field marathon. Utilising GPS and weather data, we test for, and then de-trend, each athlete's field response to gradient and headwind on course, recovering standardised proxies for power-based pacing traces. The resultant traces were analysed to ascertain whether either runner followed optimal pacing principles, and to characterise any deviations from optimality. Whereas gradient was insignificant, headwind was a significant factor in running speed variability for both runners, with Runner 2 targeting the (optimal) parallel variation principle, whilst Runner 1 did not. After adjusting for these responses, neither runner followed the (optimal) 'even' power pacing principle: Runner 2's macro-pacing strategy fit a sinusoidal oscillator with an exponentially expanding envelope, whilst Runner 1 followed a U-shaped, quadratic form. The study suggests that (a) better pacing strategy could provide elite marathon runners with an economical pathway to significant performance improvements at world-record level; and (b) the data and analysis herein are consistent with a complex-adaptive model of power regulation.
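
    The de-trending step regresses split speed on course gradient and headwind and then works with the residual trace as a proxy for power-based pacing. A minimal least-squares sketch on invented 1 km split data (not the world-record data); the simulated speeds deliberately respond only to headwind, echoing the finding that gradient was insignificant.

```python
import numpy as np

rng = np.random.default_rng(7)
n_splits = 42                      # ~1 km splits in a marathon

# Invented per-split course data and speeds
gradient = rng.normal(0, 0.01, n_splits)       # rise over run
headwind = rng.normal(0, 2.0, n_splits)        # m/s, + = into the wind
speed = 5.7 - 0.05 * headwind + rng.normal(0, 0.05, n_splits)

# Regress speed on gradient and headwind, keep the residual trace
X = np.column_stack([np.ones(n_splits), gradient, headwind])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
residual = speed - X @ beta

print("headwind coefficient: %.3f (m/s of speed per m/s of wind)" % beta[2])
# A runner following 'even' pacing would show a flat residual trace
print("residual std: %.3f m/s" % residual.std())
```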

  3. The Run-Up of Subduction Zones

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F. J.; Fuentes, M.; Matias, M.; Medina, M.

    2016-12-01

    Large earthquakes in subduction zones are liable to produce tsunamis that can cause destruction and fatalities. Run-up is a geophysical parameter that quantifies damage and indicates whether critical facilities or populations are exposed. Here we use the coupling of certain subduction regions, measured by different techniques (potency and GPS observations), to define areas where large earthquakes can occur. Taking the Slab 1.0 model from the United States Geological Survey (USGS), we define the geometry of each area, including its tsunamigenic potential. Using stochastic earthquake sources for each area at its maximum tsunamigenic potential, we calculate the numerical and analytical run-up for each case. We then perform a statistical analysis and calculate the envelope for both methods. Furthermore, we build a risk index using the closest slope to the shore in a piecewise linear approach (last-slope criterion) and the outputs from tsunami modeling. Results show that some areas are prone to produce higher run-up than others, based on the size of the earthquake, geometrical constraints of the source, the tectonic setting, and the last slope of the coast. Based on these results, zones with a low risk index can define escape routes or secure coastal areas for tsunami early warning and for urban planning purposes when detailed data are available.

  4. Cost characteristics of hospitals.

    PubMed

    Smet, Mike

    2002-09-01

    Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms, and short-run versus long-run equilibrium, are further issues that require thorough investigation. In order to put results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital, i.e., capacity and equipment) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method of explaining differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues, such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs, are also discussed briefly.

  5. NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Paxson, Daniel E.

    2014-01-01

    The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation was composed using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process required modifying the FORTRAN 77 source code, compiling, and linking whenever a new combustor simulation executable file was created. The MATLAB-based simulation does not require changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment, or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details on how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and includes results for a default simulation supplied with the source code.

  6. Catching fly balls in virtual reality: a critical test of the outfielder problem

    PubMed Central

    Fink, Philip W.; Foo, Patrick S.; Warren, William H.

    2013-01-01

    How does a baseball outfielder know where to run to catch a fly ball? The “outfielder problem” remains unresolved, and its solution would provide a window into the visual control of action. It may seem obvious that human action is based on an internal model of the physical world, such that the fielder predicts the landing point based on a mental model of the ball’s trajectory (TP). But two alternative theories, Optical Acceleration Cancellation (OAC) and Linear Optical Trajectory (LOT), propose that fielders are led to the right place at the right time by coupling their movements to visual information in a continuous “online” manner. All three theories predict successful catches and similar running paths. We provide a critical test by using virtual reality to perturb the vertical motion of the ball in mid-flight. The results confirm the predictions of OAC, but are at odds with LOT and TP. PMID:20055547

  7. A cloud, precipitation and electrification modeling effort for COHMEX

    NASA Technical Reports Server (NTRS)

    Orville, Harold D.; Helsdon, John H.; Farley, Richard D.

    1991-01-01

    In mid-1987, the Modeling Group of the Institute of Atmospheric Sciences (IAS) began to simulate and analyze cloud runs that were made during and after the Cooperative Huntsville Meteorological Experiment (COHMEX) Project. The cloud model was run nearly every day during the summer 1986 COHMEX Project. The Modeling Group was then funded to analyze the results, make further modeling tests, and help explain the precipitation processes in the Southeastern United States. The main science objectives of COHMEX were: (1) to observe the prestorm environment and understand the physical mechanisms leading to the formation of small convective systems and the processes controlling the production of precipitation; (2) to describe the structure of small convective systems producing precipitation, including the large and small scale events in the environment surrounding the developing and mature convective system; (3) to understand the interrelationships between electrical activity within the convective system and the process of precipitation; and (4) to develop and test numerical models describing the boundary layer, tropospheric, and cloud scale thermodynamics and dynamics associated with small convective systems. The latter three of these objectives were addressed by the modeling activities of the IAS. A series of cloud models was used to simulate the clouds that formed during the operational project. The primary models used to date on the project were a two-dimensional bulk water model, a two-dimensional electrical model, and, to a lesser extent, a two-dimensional detailed microphysical cloud model. All of the models are based on fully interacting microphysics, dynamics, thermodynamics, and electrical equations. Only the 20 July 1986 case was analyzed in detail, although all of the cases run during the summer were assessed as to how well they predicted the characteristics of the convection for that day.

  8. The Effectiveness of a 6-Week Intervention Program Aimed at Modifying Running Style in Patients With Chronic Exertional Compartment Syndrome: Results From a Series of Case Studies.

    PubMed

    Helmhout, Pieter H; Diebal, Angela R; van der Kaaden, Lisanne; Harts, Chris C; Beutler, Anthony; Zimmermann, Wes O

    2015-03-01

    Previous studies have reported promising effects of changing running style in patients with chronic exertional compartment syndrome (CECS) using a 6-week training program aimed at adopting a forefoot strike technique. This study expands that work by comparing a 6-week in-house, center-based run training program with a less extensive, supervised, home-based run training program (50% home training). The hypothesis was that an alteration in running technique would lead to improvements in CECS complaints and running performance, with the less supervised program producing less dramatic results. Cohort study; Level of evidence, 3. Nineteen patients with CECS were prospectively enrolled. Post-running intracompartmental pressure (ICP), run performance, and self-reported questionnaires were taken for all patients at baseline and after 6 weeks of running intervention. Questionnaires were also administered to 14 patients (7 center-based, 6 home-based) 4 months post-treatment. Significant improvement between pre-intervention and post-intervention rates was found for running distance (43%), ICP values (36%), and scores on the questionnaires Single Assessment Numeric Evaluation (SANE; 36%), Lower Leg Outcome Survey (LLOS; 18%), and Patient Specific Complaints (PSC; 60%). The mean post-treatment score on the Global Rating of Change (GROC) was between +4 and +5 ("somewhat better" to "moderately better"). In 14 participants (74%), no elevation of pain was reported post-treatment, compared with 3 participants (16%) at baseline; in all these cases, the running test was aborted because of a lack of cardiorespiratory fitness. Self-reported scores continued to improve 4 months after the end of the intervention program, with mean improvement rates of 48% (SANE), 26% (LLOS), and 81% (PSC). The mean GROC score improved to +6 points ("a great deal better"). In 19 patients diagnosed with CECS, a 6-week forefoot running intervention performed in both a center-based and a home-based training setting led to decreased post-running lower leg ICP values, improved running performance, and improved self-assessed leg condition. The influence of training group was not statistically significant. Overall, this is a promising finding, considering the significantly reduced investment in time and resources needed for the home-based program.

  9. Communications/Navigation Outage Forecasting System (C/NOFS)

    DTIC Science & Technology

    2010-02-21

    ...al., 2002]. They are also lower than values predicted by the International Reference Ionosphere (IRI) model [Gulyaeva and Titheridge, 2006] run for...based on the IRI model or other observations. At present no mechanism has been proposed which accounts for the basic formation of BPDs or their...funding by the DMSP program office. We thank J. Retterer for the IRI model results. This research was supported by the Air Force Office of Scientific Research.

  10. The Influence of Atmosphere-Ocean Interaction on MJO Development and Propagation

    DTIC Science & Technology

    2014-09-30

    ...evaluate modeling results and process studies. The field phase of this project is associated with DYNAMO, which is the US contribution to the...influence on ocean temperature; 4. extended run for DYNAMO with high vertical resolution NCOM. RESULTS: Summary of project results: The work funded...model experiments of the November 2011 MJO, the strongest MJO episode observed during DYNAMO. The previous conceptual model that was based on TOGA

  11. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop based and designed to work on a single computer, which is a major limitation in many ways, from restricted processing and storage capacity to limited accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The application is developed using free and open source software, open standards, and prototype code, and it demonstrates a framework for developing specialized cloud geospatial applications that need only a web browser to be used. It serves as a collaborative geospatial platform: multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The application is available at all times, accessible from everywhere, scalable, runs in a distributed computing environment, provides a real-time multiuser collaboration platform, uses interoperable programming-language code and components, and is flexible in accommodating additional components. It is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on the two VMs, which communicate over the internet to provide services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The presented solution is a prototype and can serve as a foundation for developing any specialized cloud geospatial application. Further research will focus on distributing the cloud application across additional VMs and on testing the scalability and availability of the services.
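
    As a rough illustration of the three-service layout described in this abstract, the sketch below exposes hypothetical /di, /wrm, and /users endpoints from a single process using only the Python standard library. The endpoint names, payloads, and routing are assumptions made for illustration; the abstract does not specify the application's actual interfaces.

    ```python
    # Minimal sketch of a three-service geospatial web API (hypothetical endpoints).
    # The real deployment splits services across two VMs; here they share one process.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def data_infrastructure():
        # DI service: return stored geospatial layers (dummy payload).
        return {"layers": ["rivers", "reservoirs", "basins"]}

    def water_resources_model():
        # WRM service: run a placeholder water-balance computation.
        inflow, demand = 12.5, 9.0  # hypothetical monthly volumes (10^6 m3)
        return {"storage_change": inflow - demand}

    def user_management():
        # User-management service: list registered users (dummy payload).
        return {"users": ["analyst", "modeller"]}

    ROUTES = {"/di": data_infrastructure,
              "/wrm": water_resources_model,
              "/users": user_management}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            service = ROUTES.get(self.path)
            self.send_response(200 if service else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            body = service() if service else {"error": "unknown service"}
            self.wfile.write(json.dumps(body).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()
    ```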

  12. Drivers' communicative interactions: on-road observations and modelling for integration in future automation systems.

    PubMed

    Portouli, Evangelia; Nathanael, Dimitris; Marmaras, Nicolas

    2014-01-01

    Social interactions with other road users are an essential component of the driving activity and may prove critical in view of future automation systems; yet up to now they have received only limited attention in the scientific literature. In this paper, it is argued that drivers base their anticipations about the traffic scene to a large extent on observations of the social behaviour of other 'animate human-vehicles'. It is further argued that in cases of uncertainty, drivers seek to establish a mutual situational awareness through deliberate communicative interactions. A linguistic model is proposed for modelling these communicative interactions. Empirical evidence from on-road observations and analysis of concurrent running commentary by 25 experienced drivers supports the proposed model. It is suggested that the integration of a social interactions layer based on illocutionary acts into future driving support and automation systems will improve their performance towards matching human drivers' expectations. Practitioner Summary: Interactions between drivers on the road may play a significant role in traffic coordination. On-road observations and running commentaries are presented as empirical evidence to support a model of such interactions; incorporation of drivers' interactions into future driving support and automation systems may improve their performance towards matching drivers' expectations.

  13. Modulation of Soil Initial State on WRF Model Performance Over China

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun

    2017-11-01

    The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical model simulations—a control run with the CFSR soil state and a disturbed run with the corrected soil state—were conducted using the Weather Research and Forecasting model. All the simulations are initiated 4 times per day and run for 48 h. Model results show that the corrected soil state, for example, a warmer and drier surface over most parts of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The changes in the lifting condensation level, level of free convection, and water transport due to the corrected soil state favor precipitation over wet regions while suppressing precipitation over dry regions. Moreover, diagnoses indicate that the remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.
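
    The bias-correction step lends itself to a compact illustration. The sketch below stands in for the study's global optimal dual kriging with an ordinary-kriging-like Gaussian-process interpolation of station innovations (observation minus CFSR) implemented in scikit-learn; the station coordinates, innovation values, and kernel length scale are all synthetic assumptions.

    ```python
    # Sketch: kriging-like correction of a reanalysis soil-temperature field using
    # station innovations (observation minus CFSR), via a Gaussian-process fit.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    stations = rng.uniform([73, 18], [135, 54], size=(50, 2))  # lon, lat over China
    innovation = 2.0 + 0.5 * rng.standard_normal(50)           # obs - CFSR, in K

    # A smooth spatial correction field; the RBF length scale is a tunable assumption.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), normalize_y=True)
    gp.fit(stations, innovation)

    lon, lat = np.meshgrid(np.linspace(73, 135, 63), np.linspace(18, 54, 37))
    grid = np.column_stack([lon.ravel(), lat.ravel()])
    correction = gp.predict(grid).reshape(lat.shape)

    cfsr_soil_t = 295.0 * np.ones_like(correction)  # placeholder first-layer field
    corrected = cfsr_soil_t + correction            # disturbed-run initial state
    ```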

  14. Numerical modelling of rapid, flow-like landslides across 3-D terrains: a Tsunami Squares approach to El Picacho landslide, El Salvador, September 19, 1982

    NASA Astrophysics Data System (ADS)

    Wang, Jiajia; Ward, Steven N.; Xiao, Lili

    2015-06-01

    Flow-like landslides are rapidly moving fluid-solid mixtures that can cause significant destruction along paths that run far from their original sources. Existing models for run-out prediction and motion simulation of flow-like landslides have many limitations. In this paper, we develop a new method named `Tsunami Squares' to simulate the generation, propagation and stoppage of flow-like landslides based on conservation of volume and momentum. Landslide materials in the new method form divisible squares that are displaced and then further fractured. The squares move under the influence of gravity-driven acceleration and suffer decelerations due to basal and dynamic friction. Distinctively, this method takes into account solid and fluid mechanics, particle interactions and flow regime transitions. We apply this approach to simulate the 1982 El Picacho landslide in San Salvador, the capital city of El Salvador. Landslide products from Tsunami Squares such as run-out distance, velocities, erosion and deposition depths and impacted area agree well with field investigations and eyewitness data.
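
    A minimal sketch of the kind of update rule such a method implies is given below: a single block accelerates under the along-slope component of gravity and decelerates by basal (Coulomb) friction. This is a toy 1-D reduction for illustration only; the actual Tsunami Squares scheme also handles square fracturing, momentum exchange between squares, erosion, and deposition, none of which is attempted here, and the friction coefficient and geometry are invented.

    ```python
    # Toy 1-D flow-like landslide block: gravity driving vs. basal friction.
    import math

    g = 9.81          # m s^-2
    mu_basal = 0.25   # basal friction coefficient (assumed)
    dt = 0.1          # s

    def step(v, slope_deg):
        """Advance the block velocity one time step on a slope of given angle."""
        theta = math.radians(slope_deg)
        driving = g * math.sin(theta)
        friction = mu_basal * g * math.cos(theta) * (1 if v > 0 else 0)
        v_new = v + (driving - friction) * dt
        return max(v_new, 0.0)   # the block does not slide backwards

    v, x = 0.0, 0.0
    for i in range(600):                    # 60 s of motion
        slope = 20.0 if x < 500 else 2.0    # steep source area, then gentle runout
        v = step(v, slope)
        x += v * dt
    print(f"runout distance ~ {x:.0f} m, final speed {v:.1f} m/s")
    ```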

  15. New NASA 3D Animation Shows Seven Days of Simulated Earth Weather

    NASA Image and Video Library

    2014-08-11

    This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, corresponding to 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio Read more or download here: svs.gsfc.nasa.gov/goto?4180

  16. Assessment of Global Forecast Ocean Assimilation Model (FOAM) using new satellite SST data

    NASA Astrophysics Data System (ADS)

    Ascione Kenov, Isabella; Sykes, Peter; Fiedler, Emma; McConnell, Niall; Ryan, Andrew; Maksymczuk, Jan

    2016-04-01

    There is an increased demand for accurate ocean weather information for applications in the fields of marine safety and navigation, water quality, offshore commercial operations, and monitoring of oil spills and pollutants, among others. The Met Office, UK, provides ocean forecasts to customers from governmental, commercial and ecological sectors using the Global Forecast Ocean Assimilation Model (FOAM), an operational modelling system which covers the global ocean and runs daily, using the NEMO (Nucleus for European Modelling of the Ocean) ocean model with a horizontal resolution of 1/4° and 75 vertical levels. The system assimilates salinity and temperature profiles, sea surface temperature (SST), sea surface height (SSH), and sea ice concentration observations on a daily basis. In this study, the FOAM system is updated to assimilate Advanced Microwave Scanning Radiometer 2 (AMSR2) and Spinning Enhanced Visible and Infrared Imager (SEVIRI) SST data. Model results from one-month trials are assessed against observations using verification tools which provide a quantitative description of model performance and error, based on statistical metrics including mean error, root mean square error (RMSE), correlation coefficient, and Taylor diagrams. A series of hindcast experiments is used to run the FOAM system with AMSR2 and SEVIRI SST data, with a control run used for comparison. Results show that all trials perform well over the global ocean and that the largest SST mean errors are found in the Southern Hemisphere. The geographic distribution of the model error for SST and temperature profiles is discussed using statistical metrics evaluated over sub-regions of the global ocean.
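
    The point-wise verification statistics named above are simple to compute; a short sketch follows. The Taylor diagram itself is just a plot of the correlation and normalized standard deviation and is omitted here, and the array values are synthetic placeholders, not FOAM output.

    ```python
    # Sketch: basic verification metrics for model-vs-observation comparison.
    import numpy as np

    def verification_stats(model, obs):
        """Mean error (bias), RMSE, and Pearson correlation of paired samples."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        err = model - obs
        bias = err.mean()
        rmse = np.sqrt((err ** 2).mean())
        corr = np.corrcoef(model, obs)[0, 1]
        return bias, rmse, corr

    # Example with synthetic SST values (degrees C).
    obs = np.array([14.2, 15.1, 13.8, 16.0, 15.5])
    model = obs + np.array([0.3, -0.1, 0.4, 0.2, -0.2])
    print(verification_stats(model, obs))
    ```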

  17. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian (LQG) control. The identification method, based on the Levenberg-Marquardt method, requires only a little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. The method makes it straightforward for LQG control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of LQG control combined with this tip-tilt disturbance model identification method is verified with experimental data replayed in simulation.
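
    As an illustration of this kind of identification, the sketch below fits a damped-sinusoid vibration model to synthetic tip-tilt samples with scipy's Levenberg-Marquardt least-squares solver. The model form, parameter names, starting guesses, and data are assumptions for illustration; the paper's actual disturbance parameterization for the LQG design is not reproduced here.

    ```python
    # Sketch: Levenberg-Marquardt fit of a damped sinusoid to tip-tilt samples.
    import numpy as np
    from scipy.optimize import least_squares

    fs = 1000.0                        # sampling rate, Hz (assumed)
    t = np.arange(2000) / fs

    # Synthetic "measured" tilt signal: one vibration peak plus noise.
    true = 0.8 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 42.0 * t + 0.3)
    y = true + 0.05 * np.random.default_rng(1).standard_normal(t.size)

    def residuals(p, t, y):
        a, decay, f, phi = p
        return a * np.exp(-decay * t) * np.cos(2 * np.pi * f * t + phi) - y

    # "A little prior information": coarse amplitude/frequency guesses from the data.
    p0 = [0.7, 0.4, 42.2, 0.0]
    fit = least_squares(residuals, p0, args=(t, y), method="lm")
    print("amplitude, decay rate, frequency, phase:", fit.x)
    ```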

  18. Extraordinary flood response of a small urban watershed to short-duration convective rainfall

    USGS Publications Warehouse

    Smith, J.A.; Miller, A.J.; Baeck, M.L.; Nelson, P.A.; Fisher, G.T.; Meierdiercks, K.L.

    2005-01-01

    The 9.1 km² Moores Run watershed in Baltimore, Maryland, experiences floods with unit discharge peaks exceeding 1 m³ s⁻¹ km⁻² 12 times yr⁻¹, on average. Few, if any, drainage basins in the continental United States have a higher frequency. A thunderstorm system on 13 June 2003 produced the record flood peak (13.2 m³ s⁻¹ km⁻²) during the 6-yr stream gauging record of Moores Run. In this paper, the hydrometeorology, hydrology, and hydraulics of extreme floods in Moores Run are examined through analyses of the 13 June 2003 storm and flood, as well as other major storm and flood events during the 2000-03 time period. The 13 June 2003 flood, like most floods in Moores Run, was produced by an organized system of thunderstorms. Analyses of the 13 June 2003 storm, which are based on volume scan reflectivity observations from the Sterling, Virginia, WSR-88D radar, are used to characterize the spatial and temporal variability of flash flood producing rainfall. Hydrology of flood response in Moores Run is characterized by highly efficient concentration of runoff through the storm drain network and relatively low runoff ratios. A detailed survey of high-water marks for the 13 June 2003 flood is used, in combination with analyses based on a 2D, depth-averaged open channel flow model (TELEMAC 2D), to examine hydraulics of the 13 June 2003 flood. Hydraulic analyses are used to examine peak discharge estimates for the 13 June flood peak, propagation of flood waves in the Moores Run channel, and 2D flow features associated with channel and floodplain geometry. © 2005 American Meteorological Society.

  19. Acute effect of different minimalist shoes on foot strike pattern and kinematics in rearfoot strikers during running.

    PubMed

    Squadrone, Roberto; Rodano, Renato; Hamill, Joseph; Preatoni, Ezio

    2015-01-01

    Despite the growing interest in minimalist shoes, no studies have compared the efficacy of different types of minimalist shoe models in reproducing barefoot running patterns and in eliciting biomechanical changes that make them differ from standard cushioned running shoes. The aim of this study was to investigate the acute effects of different footwear models, marketed as "minimalist" by their manufacturer, on running biomechanics. Six running shoes marketed as barefoot/minimalist models, a standard cushioned shoe and the barefoot condition were tested. Foot-/shoe-ground pressure and three-dimensional lower limb kinematics were measured in experienced rearfoot strike runners while they were running at 3.33 m · s⁻¹ on an instrumented treadmill. Physical and mechanical characteristics of shoes (mass, heel and forefoot sole thickness, shock absorption and flexibility) were measured with laboratory tests. There were significant changes in foot strike pattern (described by the strike index and foot contact angle) and spatio-temporal stride characteristics, whereas only some among the other selected kinematic parameters (i.e. knee angles and hip vertical displacement) changed accordingly. Different types of minimalist footwear models induced different changes. It appears that minimalist footwear with lower heel heights and minimal shock absorption is more effective in replicating barefoot running.

  20. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    NASA Astrophysics Data System (ADS)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amount and cloud radiative effects (CREs) in the historical runs driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean and for annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, particularly low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced with varying fidelity across models. Based on the validation metrics, four models—ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES—are selected as the best models, and the average of these four performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the regions of marine boundary layer in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K⁻¹ and a net radiative warming of 0.46 W m⁻² K⁻¹, suggesting a positive feedback to global warming.

  1. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and the efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and in run time. When drawing all model input values from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA, for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving the precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency when using compiled languages are best addressed via thorough documentation and model validation.
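
    Antithetic variates, the variance-reduction device used in the study, pair each random draw u with its mirror 1-u so that overestimates and underestimates tend to cancel. A minimal sketch of the effect on a toy simulated-outcome estimator follows; the QALY model itself is not reproduced, and the outcome function is a stand-in.

    ```python
    # Sketch: antithetic variates vs. plain Monte Carlo on a toy outcome model.
    import numpy as np

    rng = np.random.default_rng(42)

    def outcome(u):
        """Stand-in for a simulated patient outcome driven by uniform draws."""
        return 10.0 + 3.0 * u - 2.0 * u ** 2

    n = 10_000
    u = rng.uniform(size=n)
    plain = outcome(u)                           # ordinary sampling, n draws
    # Antithetic pairs: n//2 pairs also cost n model evaluations in total.
    anti = 0.5 * (outcome(u[: n // 2]) + outcome(1.0 - u[: n // 2]))

    print("plain:      mean %.4f, std err %.4f"
          % (plain.mean(), plain.std(ddof=1) / np.sqrt(n)))
    print("antithetic: mean %.4f, std err %.4f"
          % (anti.mean(), anti.std(ddof=1) / np.sqrt(n // 2)))
    ```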

  2. Wind tunnel investigation of helicopter rotor wake effects on three helicopter fuselage models

    NASA Technical Reports Server (NTRS)

    Wilson, J. C.; Mineck, R. E.

    1974-01-01

    The effects of rotor downwash on helicopter fuselage aerodynamic characteristics were investigated. A rotor model for generating the downwash was mounted close to each of three fuselage models. The main report presents the force and moment data in both graphical and tabular form and the pressure data in graphical form. This supplement presents the pressure data in tabular form. Each run or parameter sweep is identified by a unique run number. The data points in each run are identified by a point number. The pressure data can be matched to the force data by matching the run and point number.
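
    The run/point matching convention described above maps naturally onto a keyed join. A hedged sketch using pandas follows; the column names and values are invented, since the report's actual table layout is not reproduced here.

    ```python
    # Sketch: match tabulated pressure data to force data by run and point number.
    import pandas as pd

    force = pd.DataFrame({
        "run": [101, 101, 102], "point": [1, 2, 1],
        "normal_force": [55.2, 61.8, 47.3],
    })
    pressure = pd.DataFrame({
        "run": [101, 101, 102], "point": [1, 2, 1],
        "tap_7_cp": [-0.42, -0.51, -0.38],
    })

    # Inner join on the shared (run, point) key pairs each pressure record
    # with the force record taken at the same step of the parameter sweep.
    matched = force.merge(pressure, on=["run", "point"], how="inner")
    print(matched)
    ```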

  3. Recent advances in lunar base simulation

    NASA Astrophysics Data System (ADS)

    Johenning, B.; Koelle, H. H.

    This article reports the results of the latest computer runs of a lunar base simulation model. The lunar base consists of 20 facilities for lunar mining, processing and fabrication. The infrastructure includes solar and nuclear power plants, a central workshop, a habitat and a farm. Lunar products can be used for the construction of solar power systems (SPS) or other spacecraft at several space locations. The simulation model evaluates the mass, energy and manpower flows between the elements of the system, as well as system cost and cost of products, on an annual basis for a given operational period. The 1983 standard model run over a fifty-year life cycle (beginning about the year 2000) was accomplished for a mean annual production volume of 78,180 Mg of hardware products for export, resulting in an average specific manufacturing cost of 8.4 $/kg and a total annual cost of 1.25 billion dollars during the life cycle. The reference space transportation system uses LOX/LH2 propulsion, for which on average 210,500 Mg of LOX per year is produced on the Moon. The sensitivity analysis indicates the importance of bootstrapping, as well as the influence of market size, space transportation cost and specific resources demand on the mean lunar manufacturing cost. The option of using lunar resources turns out to be quite attractive from an economic viewpoint. Systems analysis with this lunar base model and further trade-offs will be a useful tool to confirm this.

  4. An Evidence-Based Videotaped Running Biomechanics Analysis.

    PubMed

    Souza, Richard B

    2016-02-01

    Running biomechanics play an important role in the development of injuries. Performing a running biomechanics analysis on injured runners can help to develop treatment strategies. This article provides a framework for a systematic video-based running biomechanics analysis plan based on the current evidence on running injuries, using 2-dimensional (2D) video and readily available tools. Fourteen measurements are proposed in this analysis plan from lateral and posterior video. Identifying simple 2D surrogates for the 3D biomechanical variables of interest allows for widespread translation of best practices and offers the best opportunity to impact the highly prevalent problem of the injured runner. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Investigation of Small-Caliber Primer Function Using a Multiphase Computational Model

    DTIC Science & Technology

    2008-07-01

    all solid walls along with specified inflow at the primer orifice (-0.102 cm < Y < 0.102 cm at X = 0). Initially, the entire flowfield is filled...to explicitly treat both the gas and solid phase. The model is based on the One Dimensional Turbulence modeling approach that has recently emerged as...a powerful tool in multiphase simulations. Initial results are shown for the model run as a stand-alone code and are compared to recent experiments

  6. Verification of Global Assimilation of Ionospheric Measurements Gauss Markov (GAIM-GM) Model Forecast Accuracy

    DTIC Science & Technology

    2011-09-01

    [Figure 25 (scatter plot; panels (a) Kp 0-3 and (b) Kp 4-9) omitted]...dependent physics-based model that uses the Ionospheric Forecast Model (IFM) as a background model upon which perturbations are imposed via a Kalman filter...vertical output resolution as the IFM. GAIM-GM can also be run in a regional mode with a finer resolution (Scherliess et al., 2006). GAIM-GM is

  7. Graphs and Tracks Revisited

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang; Belloni, Mario

    2013-04-01

    We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.
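
    The underlying physics of such a model is compact enough to sketch: on each straight track segment the ball has constant tangential acceleration g·sin(θ) (rolling resistance ignored), so the position and velocity graphs are piecewise parabolic and piecewise linear, respectively. The sketch below, with assumed segment heights and lengths, integrates that motion; it is not the ComPADRE model's code.

    ```python
    # Sketch: ball on piecewise-linear track segments (constant acceleration each).
    import math

    g = 9.81
    seg_len = 1.0                               # horizontal length per segment, m
    heights = [1.0, 0.8, 0.5, 0.5, 0.3, 0.0]    # user-set endpoint heights, m
    # Heights chosen non-increasing so the ball never stalls (no turnaround handling).

    v, t, dt = 0.0, 0.0, 1e-4
    for h0, h1 in zip(heights, heights[1:]):
        drop = h0 - h1
        length = math.hypot(seg_len, drop)      # along-track segment length
        a = g * drop / length                   # tangential acceleration
        s = 0.0
        while s < length:                       # march along the segment
            v += a * dt
            s += v * dt
            t += dt
        print(f"segment end: t = {t:.2f} s, v = {v:.2f} m/s")
    ```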

  8. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
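
    Because each calibration run needs only a small parameter vector in and a few statistics out, with no inter-process communication, the workload is embarrassingly parallel. A hedged sketch of that pattern with Python's multiprocessing is shown below; the model function and error metric are placeholders, not the tool described in the abstract.

    ```python
    # Sketch: embarrassingly parallel model calibration over a parameter sample.
    from multiprocessing import Pool
    import random

    OBSERVED_PEAK = 42.0   # observed stream flow peak (placeholder units)

    def run_model(params):
        """Placeholder model: returns the parameters and their squared error."""
        k, theta = params
        simulated_peak = k * 10.0 + theta * 2.0   # stand-in for a real model run
        return params, (simulated_peak - OBSERVED_PEAK) ** 2

    if __name__ == "__main__":
        rng = random.Random(7)
        samples = [(rng.uniform(0, 10), rng.uniform(0, 5)) for _ in range(10_000)]
        with Pool() as pool:                        # one worker per CPU core
            results = pool.map(run_model, samples)  # no inter-process communication
        best_params, best_err = min(results, key=lambda r: r[1])
        print("best parameters:", best_params, "squared error:", best_err)
    ```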

  9. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling marine oily wastewater treatment processes. It begins by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions, each reaction governed by a probability parameter that determines its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73% and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while the agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
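
    The core mechanism, a reaction firing with a calibrated probability whenever its reactant agents meet, can be sketched in a few lines. The agent types, the single reaction shown, and the probability value below are illustrative stand-ins, not the study's 8 agent types and 11 reactions.

    ```python
    # Sketch: one probabilistic reaction step in an agent-based treatment model.
    import random

    rng = random.Random(3)
    agents = {"naphthalene": 1000, "OH_radical": 400, "product": 0}
    P_REACT = 0.35   # calibrated occurrence probability for this reaction (assumed)

    def step(agents):
        """Each naphthalene-radical encounter reacts with probability P_REACT."""
        encounters = min(agents["naphthalene"], agents["OH_radical"])
        reacted = sum(rng.random() < P_REACT for _ in range(encounters))
        agents["naphthalene"] -= reacted
        agents["OH_radical"] -= reacted
        agents["product"] += reacted

    for _ in range(10):
        step(agents)
    print(agents)   # removal emerges from many probabilistic reaction events
    ```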

  10. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. A statistical model of the human core-temperature circadian rhythm

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Choe, Y.; Luithardt, H.; Czeisler, C. A.

    2000-01-01

    We formulate a statistical model of the human core-temperature circadian rhythm in which the circadian signal is modeled as a van der Pol oscillator, the thermoregulatory response is represented as a first-order autoregressive process, and the evoked effect of activity is modeled with a function specific for each circadian protocol. The new model directly links differential equation-based simulation models and harmonic regression analysis methods and permits statistical analysis of both static and dynamical properties of the circadian pacemaker from experimental data. We estimate the model parameters by using numerically efficient maximum likelihood algorithms and analyze human core-temperature data from forced desynchrony, free-run, and constant-routine protocols. By representing explicitly the dynamical effects of ambient light input to the human circadian pacemaker, the new model can estimate with high precision the correct intrinsic period of this oscillator (approximately 24 h) from both free-run and forced desynchrony studies. Although the van der Pol model approximates well the dynamical features of the circadian pacemaker, the optimal dynamical model of the human biological clock may have a harmonic structure different from that of the van der Pol oscillator.
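
    For readers unfamiliar with the oscillator at the heart of this model, the sketch below integrates a weakly nonlinear van der Pol equation whose limit-cycle period is scaled to roughly 24 h and adds a first-order autoregressive noise term, loosely mirroring the signal-plus-thermoregulatory-response decomposition. All parameter values are illustrative assumptions, not the fitted values from the study.

    ```python
    # Sketch: van der Pol oscillator with ~24 h period plus AR(1) noise.
    import numpy as np

    mu = 0.1                      # stiffness (small -> near-sinusoidal limit cycle)
    omega = 2 * np.pi / 24.0      # rad per hour, ~24 h intrinsic period
    dt = 0.05                     # hours
    n = int(10 * 24 / dt)         # ten simulated days

    x, v = 1.0, 0.0               # oscillator state (circadian signal)
    ar, phi, sigma = 0.0, 0.95, 0.02   # AR(1) "thermoregulatory response" term
    rng = np.random.default_rng(0)

    core_temp = np.empty(n)
    for i in range(n):
        # Euler step of x'' = mu*(1 - x^2)*x' - omega^2 * x
        a = mu * (1 - x * x) * v - omega ** 2 * x
        v += a * dt
        x += v * dt
        ar = phi * ar + sigma * rng.standard_normal()
        core_temp[i] = 37.0 + 0.5 * x + ar   # deg C: mean + circadian + response
    ```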

  12. 78 FR 61946 - Pheasant Run Wind II, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2462-000] Pheasant Run Wind II, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Run Wind II, LLC's application for market-based rate authority, with an accompanying rate schedule...

  13. Reduced Order Modeling for Rapid Simulation of Blast Events of a Military Ground Vehicle and Its Occupants

    DTIC Science & Technology

    2014-08-12

    deformation of the hull due to blast . An LS- Dyna model which included the ConWep function [6] was run, with a charge mass corresponding to a STANAG...different models and blast loadings are shown in Table 3. These responses are based on generic seat properties and assumed dummy position, which can be...Comparison between MADYMO and LS- Dyna models An LS- Dyna model with ConWep blast force applied to all segments of the hull floor and a MADYMO model with

  14. Do climate model predictions agree with long-term precipitation trends in the arid southwestern United States?

    NASA Astrophysics Data System (ADS)

    Elias, E.; Rango, A.; James, D.; Maxwell, C.; Anderson, J.; Abatzoglou, J. T.

    2016-12-01

    Researchers evaluating climate projections across southwestern North America have observed a decreasing precipitation trend. Aridification was most pronounced in the cold (non-monsoonal) season, whereas downward trends in precipitation were smaller in the warm (monsoonal) season. In this region, based upon a multimodel mean of 20 Coupled Model Intercomparison Project Phase 5 models using a business-as-usual (Representative Concentration Pathway 8.5) trajectory, midcentury precipitation is projected to increase slightly during the monsoonal period (July-September; 6%) and decrease slightly during the remainder of the year (October-June; -4%). We use observed long-term (1915-2015) monthly precipitation records from 16 weather stations to investigate how well measured trends corroborate climate model predictions during the monsoonal and non-monsoonal timeframes. Running trend analysis using the Mann-Kendall test for 15- to 101-year moving windows reveals that half the stations showed significant (p≤0.1), albeit small, increasing trends based on the longest-term record. Trends based on shorter-term records reveal a period of significant precipitation decline at all stations representing the 1950s drought. Trends from 1930 to 2015 reveal significant annual, monsoonal and non-monsoonal increases in precipitation (Fig. 1). The 1960 to 2015 time window shows no significant precipitation trends. The more recent time window (1980 to 2015) shows a slight, but not significant, increase in monsoonal precipitation and a larger, significant decline in non-monsoonal precipitation. GCM precipitation projections are consistent with the more recent trends for the region. Running trends from the most recent time window (mid-1990s to 2015) at all stations show increasing monsoonal precipitation and decreasing October-June precipitation, with significant trends at 6 of 16 stations. Running trend analysis revealed that the long-term trends were not persistent throughout the series length but depended on the period examined. Recent trends in Southwest precipitation are directionally consistent with anthropogenic climate change.
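
    The Mann-Kendall statistic behind these running trends is distribution-free and easy to state: S sums the signs of all pairwise differences, and a standardized Z is compared with the normal distribution. A short sketch (without the tie correction, which is a simplification) follows on synthetic data; applying it in moving windows reproduces the "running trend" idea.

    ```python
    # Sketch: Mann-Kendall trend test (no tie correction) in running windows.
    import math
    import numpy as np

    def mann_kendall_z(x):
        """Standardized Mann-Kendall statistic for a 1-D series."""
        x = np.asarray(x, float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0   # tie correction omitted
        if s > 0:
            return (s - 1) / math.sqrt(var_s)
        if s < 0:
            return (s + 1) / math.sqrt(var_s)
        return 0.0

    rng = np.random.default_rng(5)
    precip = 300 + 0.5 * np.arange(101) + 40 * rng.standard_normal(101)  # mm/yr

    # Running trends: several window lengths, from 15 yr up to the full record.
    for w in (15, 30, 60, 101):
        z = [mann_kendall_z(precip[i:i + w]) for i in range(len(precip) - w + 1)]
        print(f"window {w:3d} yr: max |Z| = {max(abs(v) for v in z):.2f}")
    ```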

  15. [The virtual reality simulation research of China Mechanical Virtual Human based on the Creator/Vega].

    PubMed

    Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng

    2010-10-01

    The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images; it has significant practical application value. This paper introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and massive data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way to apply CMVH in virtual reality.

  16. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which in combination have been shown to dramatically decrease the number of model runs required for calibration on synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMA-ES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMA-ES requires approximately 10%-25% of the model runs of ordinary CMA-ES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102, Springer. Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948, Berlin: Springer. Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
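
    The first enhancement, substituting a cheap surrogate for part of the population's expensive evaluations, can be illustrated compactly. The sketch below uses a plain evolution-strategy-style loop with a k-nearest-neighbour surrogate, not CMA-ES itself, and every parameter value is an illustrative assumption.

    ```python
    # Sketch: one ES loop where a cheap surrogate pre-ranks candidates and only
    # a fraction of the population receives true (expensive) model runs.
    import numpy as np

    rng = np.random.default_rng(0)

    def true_objective(x):                  # stand-in for an expensive model run
        return float(np.sum(x ** 2))

    archive_x = rng.standard_normal((40, 2))
    archive_f = np.array([true_objective(x) for x in archive_x])

    def surrogate(x):
        """k-nearest-neighbour surrogate built from archived true evaluations."""
        d = np.linalg.norm(archive_x - x, axis=1)
        return archive_f[np.argsort(d)[:5]].mean()

    mean, sigma, lam, frac_true = np.zeros(2), 0.5, 20, 0.3
    for gen in range(10):
        pop = mean + sigma * rng.standard_normal((lam, 2))
        pre_rank = np.argsort([surrogate(x) for x in pop])   # cheap ranking
        n_true = max(2, int(frac_true * lam))
        elite = pop[pre_rank[:n_true]]                       # evaluate best few
        f_true = np.array([true_objective(x) for x in elite])
        archive_x = np.vstack([archive_x, elite])            # grow the archive
        archive_f = np.concatenate([archive_f, f_true])
        mean = elite[np.argsort(f_true)[:3]].mean(axis=0)    # recombine top 3
    print("final mean:", mean)
    ```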

  17. Model-based calculations of surface mass balance of mountain glaciers for the purpose of water consumption planning: focus on Djankuat Glacier (Central Caucasus)

    NASA Astrophysics Data System (ADS)

    Rybak, O. O.; Rybak, E. A.

    2018-01-01

    Mountain glaciers act as regulators of run-off in the summer period, which is crucial for the economy, especially in dynamically developing regions with rapidly growing populations, such as Central Asia or the Northern Caucasus in Russia. Overall, glaciers stabilize water consumption in comparatively arid areas and provide conditions for sustainable development of the economy in mountainous regions and the surrounding territories. A proper prediction of glacial run-off is required to elaborate regional development strategies. This goal can be achieved by implementing mathematical modeling methods in planning methodologies. In this paper, we consider one of the first steps in glacier dynamical modeling - surface mass balance simulation. We focus on the Djankuat Glacier in the Central Caucasus, where regular observations have been conducted during the last fifty years, providing an exceptional opportunity to calibrate and validate a mathematical model.

  18. Pairwise velocities in the "Running FLRW" cosmological model

    NASA Astrophysics Data System (ADS)

    Bibiano, Antonio; Croton, Darren J.

    2017-05-01

    We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the 'Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Λ cold dark matter (ΛCDM) with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various coupled dark energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM that could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.

  19. Quality of workplace social relationships and perceived health.

    PubMed

    Rydstedt, Leif W; Head, Jenny; Stansfeld, Stephen A; Woodley-Jones, Davina

    2012-06-01

    Associations between the quality of social relationships at work and mental and self-reported physical health were examined to assess whether these associations were independent of job strain. The study was based on cross-sectional survey data from 728 employees (response rate 58%) and included the Demand-Control-(Support) (DC-S) model, six items on the quality of social relationships at the workplace, the General Health Questionnaire (30), and an item on self-reported physical health. Logistic regression analyses were used. A first set of models was run with adjustment for age, sex, and socioeconomic group. A second set of models was run with additional adjustment for the dimensions of the DC-S model. Positive associations were found between the quality of social relationships and mental health as well as self-rated physical health, and these associations remained significant even after adjustment for the DC-S dimensions. The findings add support to the Health and Safety Executive stress management standards on social relationships at the workplace.

  20. Actual situation analyses of rat-run traffic on community streets based on car probe data

    NASA Astrophysics Data System (ADS)

    Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao

    2017-10-01

    Lowering so-called "rat-run" traffic on community streets has been one of the significant challenges for improving the living environment of neighborhoods. However, it has been difficult to quantitatively grasp the actual situation of rat-run traffic through traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic based on car probe data. In addition, based on the extracted rat-run traffic in Toyohashi city, Japan, we analyze its actual situation, such as the time and location distribution of the rat-run traffic. We find that in Toyohashi city the rate of rat-run route use increases in the peak time period. Focusing on the location distribution, rat-run traffic passes through a wide variety of community streets, and there is no great inter-district bias in the routes frequently used as rat-runs. Next, we focused on trips passing through one heavily used rat-run route. We found that these drivers may use the route habitually, because their trips had some commonalities: they tend to use the rat-run route because it is shorter than the alternative highway route, and their travel speeds were faster than when using the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and the phenomenal tendencies of rat-run traffic.
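
    One simple way to operationalize the extraction step is to flag probe trips that traverse a run of community-street links bounded by arterial links on both sides. The sketch below assumes trips are already map-matched to links tagged by road class; the tagging scheme and threshold are invented for illustration and are not the paper's actual rule.

    ```python
    # Sketch: flag map-matched probe trips that cut through community streets.
    # A trip is a sequence of (link_id, road_class) pairs after map matching.
    ARTERIAL, COMMUNITY = "arterial", "community"

    def is_rat_run(trip, min_cut_links=3):
        """True if the trip contains a run of community-street links bounded by
        arterial links on both sides, of at least min_cut_links links."""
        run_length, seen_arterial_before = 0, False
        for _, road_class in trip:
            if road_class == ARTERIAL:
                if seen_arterial_before and run_length >= min_cut_links:
                    return True      # community run sandwiched by arterials
                seen_arterial_before, run_length = True, 0
            else:
                run_length += 1
        return False

    trip = [("L1", ARTERIAL), ("L2", COMMUNITY), ("L3", COMMUNITY),
            ("L4", COMMUNITY), ("L5", ARTERIAL)]
    print(is_rat_run(trip))   # True: three community links between arterials
    ```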

  1. Development of a railway wagon-track interaction model: Case studies on excited tracks

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Xianmai; Li, Xuwei; He, Xianglin

    2018-02-01

    In this paper, a theoretical framework for modeling railway wagon-ballast track interactions is presented, in which the dynamic equations of motion of the wagon-track system are constructed by effectively coupling the linear and nonlinear dynamic characteristics of the system components. For the linear components, the energy-variational principle is directly used to derive their dynamic matrices, while for the nonlinear components, the dynamic equilibrium method is implemented to deduce the load vectors. On this basis, a novel railway wagon-ballast track interaction model is developed and validated by comparison with experimental data measured on a heavy-haul railway and with another advanced model. The study further contributes to determining the critical speed of instability and the limits and localization of track irregularities with respect to derailment accidents, by effectively integrating the dynamic simulation model, the track irregularity probabilistic model, and time-frequency analysis methods. The proposed approaches can provide crucial information to guarantee the running safety and stability of the wagon-track system when considering track geometries and various running speeds.
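
    Models of this family typically assemble into second-order equations of motion of the form M q'' + C q' + K q = F(t), with the nonlinear wheel-rail forces folded into the load vector F. A minimal constant-average-acceleration Newmark integration of such a system is sketched below on a two-degree-of-freedom stand-in; the matrices and excitation are illustrative, not the wagon-track model's.

    ```python
    # Sketch: Newmark-beta (average acceleration) integration of M q'' + C q' + K q = F.
    import numpy as np

    M = np.diag([1200.0, 600.0])                     # masses (kg), illustrative
    K = np.array([[2.0e6, -1.0e6], [-1.0e6, 1.0e6]]) # stiffness (N/m)
    C = 0.01 * K                                     # stiffness-proportional damping

    dt, n = 1e-3, 2000
    beta, gamma = 0.25, 0.5                          # unconditionally stable choice

    q, v = np.zeros(2), np.zeros(2)
    a = np.linalg.solve(M, -K @ q - C @ v)           # initial acceleration
    K_eff = M / (beta * dt**2) + gamma / (beta * dt) * C + K

    for i in range(n):
        t = (i + 1) * dt
        F = np.array([0.0, 1e4 * np.sin(2 * np.pi * 5 * t)])  # excitation stand-in
        rhs = (F + M @ (q / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                 + C @ (gamma / (beta * dt) * q + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
        q_new = np.linalg.solve(K_eff, rhs)
        a_new = (q_new - q) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        q, a = q_new, a_new
    print("final displacements (m):", q)
    ```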

  2. Agricultural Decision Support Through Robust Assimilation of Satellite Derived Soil Moisture Estimates

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2012-12-01

    Soil moisture is a key component of the hydrological process, affects surface and boundary layer energy fluxes, and is a driving factor in agricultural production. Multiple in situ soil moisture measuring instruments, such as time-domain reflectometry (TDR) and nuclear probes, are in use, along with remote sensing methods such as active and passive microwave (PM) sensors. In situ measurements, despite being more accurate, can only be obtained at discrete points over small spatial scales. Remote sensing estimates, on the other hand, can be obtained over larger spatial domains with varying spatial and temporal resolutions. Soil moisture profiles derived from satellite-based thermal infrared (TIR) imagery can overcome many of the problems associated with laborious in situ observations over large spatial domains. An area where soil moisture observation and assimilation is receiving increasing attention is agricultural crop modeling. This study revolves around the use of the Decision Support System for Agrotechnology Transfer (DSSAT) crop model to simulate corn yields under various forcing scenarios. First, the model was run and calibrated using observed precipitation and model-generated soil moisture dynamics. Next, the modeled soil moisture was updated using estimates derived from satellite-based TIR imagery and the Atmospheric Land Exchange Inverse (ALEXI) model. Three climatically different locations were selected to test the concept: Bell Mina, Alabama, in the southeastern United States, representing a humid subtropical climate; Nabb, Indiana, in the midwestern United States, representing a humid continental climate; and Lubbock, Texas, in the southern United States, representing a semiarid steppe climate. A temporal (2000-2009) correlation analysis of the soil moisture values from both DSSAT and ALEXI was performed and validated against the Land Information System (LIS) soil moisture dataset. The results clearly show a strong correlation (R = 73%) between ALEXI and DSSAT at Bell Mina; at Nabb and Lubbock the correlation was 50-60%. Further, multiple experiments were conducted for each location: a) a DSSAT rain-fed 10-year sequential run forced with Daymet precipitation; b) a DSSAT sequential run with no precipitation data; and c) a DSSAT run forced with ALEXI soil moisture estimates alone. The preliminary results of all the experiments are quantified through soil moisture correlations and yield comparisons. In general, the preliminary results strongly suggest that DSSAT forced with ALEXI can provide significant information, especially at locations where no significant precipitation data exist.

  3. Observation and Modeling of Clear Air Turbulence (CAT) over Europe

    NASA Astrophysics Data System (ADS)

    Sprenger, M.; Mayoraz, L.; Stauch, V.; Sharman, B.; Polymeris, J.

    2012-04-01

    CAT represents a very relevant phenomenon for aviation safety. It can lead to passenger injuries, causes an increase in fuel consumption and, at severe intensity, can involve structural damage to the aircraft. The physical processes causing CAT are at present not fully understood. Moreover, because of its small scale, CAT cannot be resolved by numerical weather prediction (NWP) models. In this study, the physical processes related to CAT and its representation in NWP models are further investigated. First, 134 CAT events over Europe are extracted from a flight data monitoring (FDM) database, run by the airline SWISS and containing over 100,000 flights. The location, time, and meteorological parameters along the turbulent spots are analysed. Furthermore, the 7-km NWP model run by the Swiss national weather service (MeteoSwiss) is used to calculate model-based CAT indices, e.g. the Richardson number, the Ellrod & Knapp turbulence index and a complex/combined CAT index developed at NCAR. The CAT indices simulated with COSMO-7 are then compared to the observed CAT spots, allowing an assessment of the model's performance and of its potential use in a CAT warning system. In a second step, the meteorological conditions associated with CAT are investigated. To this aim, CAT events are defined as coherent structures in space and in time, i.e. their dimension and life cycle are studied in connection with jet streams and upper-level fronts. Finally, in a third step the predictability of CAT is assessed by comparing CAT index predictions based on different lead times of the NWP model COSMO-7.
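
    Two of the indices named above are simple diagnostics of the resolved flow: the gradient Richardson number Ri = N^2/S^2, with N^2 = (g/θ)·dθ/dz and S^2 the squared vertical wind shear, and the Ellrod & Knapp index TI1, the product of vertical wind shear and total horizontal deformation. A sketch with finite differences at a single grid point follows; all numerical values are synthetic.

    ```python
    # Sketch: gradient Richardson number and Ellrod & Knapp TI1 at one grid point.
    import numpy as np

    g = 9.81

    def richardson(theta, u, v, z):
        """Bulk-gradient Ri between two vertical levels (index 0 below, 1 above)."""
        dz = z[1] - z[0]
        n2 = (g / theta.mean()) * (theta[1] - theta[0]) / dz
        s2 = ((u[1] - u[0]) / dz) ** 2 + ((v[1] - v[0]) / dz) ** 2
        return n2 / s2

    def ellrod_ti1(du_dx, du_dy, dv_dx, dv_dy, du_dz, dv_dz):
        """TI1 = vertical wind shear magnitude times total horizontal deformation."""
        vws = np.hypot(du_dz, dv_dz)
        stretching = du_dx - dv_dy
        shearing = dv_dx + du_dy
        return vws * np.hypot(stretching, shearing)

    # Synthetic jet-exit-like values near 300 hPa; Ri < 0.25 favours turbulence.
    print(richardson(theta=np.array([330.0, 331.0]),
                     u=np.array([38.0, 46.0]), v=np.array([5.0, 9.0]),
                     z=np.array([9000.0, 9500.0])))
    print(ellrod_ti1(2e-5, 1e-5, -1e-5, -2e-5, 8.0 / 500.0, 4.0 / 500.0))
    ```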

  4. A software-based sensor for combined sewer overflows.

    PubMed

    Leonhardt, G; Fach, S; Engelhard, C; Kinzel, H; Rauch, W

    2012-01-01

    A new methodology for the online estimation of excess flow from combined sewer overflow (CSO) structures based on simulation models is presented. If sufficient flow and water level data from the sewer system are available, no rainfall data are needed to run the model. An inverse rainfall-runoff model was developed to simulate net rainfall based on flow and water level data. Excess flow at all CSO structures in a catchment can then be simulated with a rainfall-runoff model. The method is applied to a case study, and the results show that the inverse rainfall-runoff model can be used in place of missing rain gauges. Online operation is ensured by software providing an interface to the operator's SCADA system and controlling the model. A water quality model could be included to also simulate pollutant concentrations in the excess flow.
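
    The inverse step can be illustrated with the simplest possible runoff model. For a linear reservoir, outflow obeys k·dQ/dt + Q = P·A, so an observed flow series can be inverted for net rainfall as P = (k·dQ/dt + Q)/A. The sketch below applies that inversion with finite differences; the linear-reservoir form and all parameter values are assumptions for illustration, not the paper's model.

    ```python
    # Sketch: invert a linear-reservoir rainfall-runoff model for net rainfall.
    # Forward model: k dQ/dt + Q = P * A  =>  inverse: P = (k dQ/dt + Q) / A
    import numpy as np

    k = 1800.0    # reservoir time constant, s (assumed)
    A = 5.0e5     # contributing area, m^2 (assumed)
    dt = 300.0    # gauge sampling interval, s

    # Observed flow at a sewer gauge (m^3/s): synthetic storm hydrograph.
    t = np.arange(0, 40) * dt
    q_obs = 0.05 + 0.6 * np.exp(-((t - 4500.0) / 2400.0) ** 2)

    dq_dt = np.gradient(q_obs, dt)        # finite-difference derivative
    p_net = (k * dq_dt + q_obs) / A       # net rainfall intensity, m/s
    p_mm_per_h = np.clip(p_net, 0, None) * 1000 * 3600
    print(p_mm_per_h.round(2))
    ```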

  5. The Met Office Coupled Atmosphere/Land/Ocean/Sea-Ice Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Mirouze, Isabelle; Martin, Matthew; Hines, Adrian; Guiavarch, Catherine; Shelly, Ann

    2014-05-01

    The Met Office has developed a weakly coupled data assimilation (DA) system using the global coupled model HadGEM3 (Hadley Centre Global Environment Model, version 3). This model combines the atmospheric model UM (Unified Model) at 60 km horizontal resolution on 85 vertical levels, the ocean model NEMO (Nucleus for European Modelling of the Ocean) at 25 km (at the equator) horizontal resolution on 75 vertical levels, and the sea-ice model CICE at the same resolution as NEMO. The atmosphere and the ocean/sea-ice fields are coupled every hour using the OASIS coupler. The coupled model is corrected using two separate 6-hour-window data assimilation systems: a 4D-Var for the atmosphere, with associated soil moisture content nudging and snow analysis schemes, on the one hand, and a 3D-Var FGAT for the ocean and sea ice on the other. The background information in the DA systems comes from a previous 6-hour forecast of the coupled model. To show the impact of coupled DA, one-month experiments have been carried out, including 1) a full atmosphere/land/ocean/sea-ice coupled DA run, 2) an atmosphere-only run forced by OSTIA SSTs and sea ice with atmosphere and land DA, and 3) an ocean-only run forced by atmospheric fields from run 2 with ocean and sea-ice DA. In addition, 5-day forecast runs, started twice a day, have been produced from initial conditions generated by either run 1 or a combination of runs 2 and 3. The different results have been compared to each other and, whenever possible, to other references such as the Met Office atmosphere and ocean operational analyses or the OSTIA data. These all show the coupled DA system functioning well. Evidence of imbalances and initialisation shocks was also sought.

  6. A dynamic structural model of expanded RNA CAG repeats: A refined X-ray structure and computational investigations using molecular dynamics and umbrella sampling simulations

    PubMed Central

    Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.

    2013-01-01

    One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, a crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG expanded transcripts, refined to 1.65 Å resolution is disclosed that shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on the model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an explicit solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (in the syn-anti conformation) transformed to the anti-anti conformation, following the pathway predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. The computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but that syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamics-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937

  7. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainty in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluating the potential information provided by each intelligently selected candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems, with encouraging results. Directions for further research are indicated.
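
    The partition described here is the law of total variance, Var(Y) = E[Var(Y|X)] + Var(E[Y|X]): a candidate set of runs that would pin down inputs X could explain the second term. A toy sketch estimating that explained component by binning Monte Carlo samples follows; the three-input outcome function and the binning estimator are illustrative assumptions, not the research's actual procedure.

    ```python
    # Sketch: score candidate run sets by the performance-measure variance they
    # could explain, Var(E[Y|X]), estimated by binning Monte Carlo samples.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000
    x1, x2, x3 = rng.standard_normal((3, n))      # uncertain model inputs
    y = 2.0 * x1 + 0.5 * x2 ** 2 + 0.2 * x3       # surrogate performance measure

    def explained_variance(y, x, bins=30):
        """Var(E[Y | X]) via binning: variance of the bin-conditional means."""
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
        means = np.array([y[idx == b].mean() for b in range(bins)])
        weights = np.bincount(idx, minlength=bins) / len(y)
        return np.sum(weights * (means - y.mean()) ** 2)

    total = y.var()
    for name, x in [("runs resolving x1", x1), ("runs resolving x2", x2),
                    ("runs resolving x3", x3)]:
        print(f"{name}: would explain {explained_variance(y, x) / total:.0%} of Var(Y)")
    ```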

  8. Using Supervised Learning Techniques for Diagnosis of Dynamic Systems

    DTIC Science & Technology

    2002-05-04

    M. Gasca, Juan A. Ortega. Abstract (excerpt): This paper describes an approach based on supervised learning; diagnosing system faults is needed to maintain the systems ... labelled data will be used for this purpose [5][6], treated to add additional information about the running of the system. In [7] the fundamentals of the ... [8] proposes a classification tool for the set of labelled and treated data: a consistency-based approach with qualitative models ...

  9. On-line bolt-loosening detection method of key components of running trains using binocular vision

    NASA Astrophysics Data System (ADS)

    Xie, Yanxia; Sun, Junhua

    2017-11-01

    Bolt loosening, a typical hidden fault, affects the running quality of trains and can even cause serious safety accidents. However, existing fault detection approaches based on two-dimensional images cannot detect bolt loosening due to the lack of depth information. Therefore, we propose a novel online bolt-loosening detection method using binocular vision. Firstly, a target detection model based on a convolutional neural network (CNN) is used to locate the target regions. Then, stereo matching and three-dimensional reconstruction are performed to detect bolt-loosening faults. The experimental results show that the method can characterize the looseness of multiple bolts simultaneously. The measurement repeatability and precision are less than 0.03 mm and 0.09 mm, respectively, and the relative error is within 1.09%.
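
    The depth recovery that makes loosening measurable rests on standard rectified-stereo triangulation, Z = f·B/d; the sketch below is our illustration (camera parameters and disparities are assumed, not taken from the paper), tracking the change in a bolt head's distance between a reference and a current inspection.

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Rectified pinhole stereo: Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        # Illustrative rig: ~2000 px focal length, 0.12 m baseline, bolt ~0.5 m away
        f_px, B = 2000.0, 0.12
        d_ref, d_now = 480.00, 480.48        # bolt-head disparity, pixels
        z_ref = depth_from_disparity(d_ref, f_px, B)
        z_now = depth_from_disparity(d_now, f_px, B)
        # Positive change: the bolt head moved toward the cameras, i.e. it
        # protrudes ~0.5 mm more than at reference, indicating loosening.
        print(f"protrusion change: {(z_ref - z_now) * 1000:.2f} mm")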

  10. Wavelet-based identification of rotor blades in passage-through-resonance tests

    NASA Astrophysics Data System (ADS)

    Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano

    2018-01-01

    Turbine blades are critical components in turbo engines, and their design process usually includes experimental tests to validate and/or update numerical models. These tests are generally carried out on full-scale rotors having some blades instrumented with strain gauges and usually involve a run-up or a run-down phase. The quantification of damping in these conditions is rather challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping, due both to the finite sweep velocity and to the variation of the blade natural frequencies with the rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
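
    To make the wavelet-based idea concrete, here is a minimal sketch (ours, not the authors' procedure) that estimates modal damping from a free decay: a Morlet continuous wavelet transform is evaluated, the ridge amplitude is extracted, and the damping ratio follows from the slope of its logarithm, since |W(t)| decays as exp(-ζ·ωn·t).

        import numpy as np

        fs, fn, zeta = 2000.0, 120.0, 0.01       # sample rate, natural freq, damping
        t = np.arange(0, 2.0, 1.0 / fs)
        wn = 2 * np.pi * fn
        y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

        def morlet_cwt(sig, freqs, fs, w0=6.0):
            """Continuous wavelet transform with an analytic Morlet wavelet."""
            out = np.empty((len(freqs), len(sig)), dtype=complex)
            for i, f in enumerate(freqs):
                s = w0 * fs / (2 * np.pi * f)               # scale for frequency f
                u = np.arange(-4 * s, 4 * s + 1) / s
                psi = np.exp(1j * w0 * u) * np.exp(-u**2 / 2) / np.sqrt(s)
                out[i] = np.convolve(sig, np.conj(psi[::-1]), mode="same")
            return out

        freqs = np.linspace(80, 160, 81)
        W = morlet_cwt(y, freqs, fs)
        ridge = np.abs(W).max(axis=0)                 # amplitude along the ridge
        i0, i1 = int(0.1 * fs), int(1.5 * fs)         # trim edge effects
        slope = np.polyfit(t[i0:i1], np.log(ridge[i0:i1]), 1)[0]
        print(f"estimated damping ratio: {-slope / wn:.4f}")  # recovers ~0.01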

  11. NO PLIF Imaging in the CUBRC 48 Inch Shock Tunnel

    NASA Technical Reports Server (NTRS)

    Jiang, N.; Bruzzese, J.; Patton, R.; Sutton, J.; Lempert, W.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.; et al.

    2011-01-01

    Nitric Oxide Planar Laser-Induced Fluorescence (NO PLIF) imaging is demonstrated at a 10 kHz repetition rate in the Calspan-University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single approximately 10-millisecond duration run of the ground test facility. This represents over an order of magnitude improvement in data rate over previous PLIF-based diagnostic approaches. Comparison with a preliminary CFD simulation shows good overall qualitative agreement between the prediction of the mean NO density field and the observed PLIF image intensity, averaged over forty individual images obtained during several facility runs.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandersall, K S; Tarver, C M; Garcia, F

    Shock initiation experiments on the HMX-based explosives LX-10 (95% HMX, 5% Viton by weight) and LX-07 (90% HMX, 10% Viton by weight) were performed to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. A 101 mm diameter propellant-driven gas gun was utilized to initiate the explosive samples, with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments and prior experiments on another HMX-based explosive, LX-04 (85% HMX, 15% Viton by weight), will be shown, discussed, and compared as a function of the binder content. This parameter set will provide additional information to ensure accurate code predictions for safety scenarios involving HMX explosives with different binder contents.
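
    Run-distance-to-detonation data of this kind are conventionally summarized on a Pop-plot, which is linear in log-log space; the sketch below fits that power law to illustrative values (not the measured LX-10/LX-07 data).

        import numpy as np

        # Pop-plot: log10(x*) = a + b * log10(P), x* = run distance to detonation
        P = np.array([3.0, 4.5, 6.0, 8.0])      # input pressure, GPa (illustrative)
        x = np.array([12.0, 6.5, 4.0, 2.4])     # run distance, mm (illustrative)
        b, a = np.polyfit(np.log10(P), np.log10(x), 1)
        print(f"log10(x*) = {a:.3f} + {b:.3f} log10(P)")
        # Predicted run distance at 5 GPa:
        print(f"x*(5 GPa) = {10**(a + b * np.log10(5.0)):.2f} mm")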

  13. Analysis of stratospheric ozone, temperature, and minor constituent data

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Jackman, Charles H.; Kaye, Jack A.; Rood, Richard B.

    1990-01-01

    The objective of this research is to use available satellite measurements of temperature and constituent concentrations to test the conceptual picture of stratospheric chemistry and transport. This was originally broken down into two sub-goals: first, to use the constituent data to search for critical tests of our understanding of stratospheric chemistry and second, to examine constituent transport processes emphasizing interactions with chemistry on various time scales. A third important goal which has evolved is to use the available solar backscattered ultraviolet (SBUV) and Total Ozone Mapping Spectrometer (TOMS) data from Nimbus 7 to describe the morphology of recent changes in Antarctic and global ozone with emphasis on searching for constraints to theories. The major effort now being pursued relative to the two original goals is our effort as a theoretical team for the Arctic Airborne Stratospheric Expedition (AASE). Our effort for the AASE is based on the 3D transport and chemistry model at Goddard. Our goal is to use this model to place the results from the mission data in a regional and global context. Specifically, we set out to make model runs starting in late December and running through March of 1989, both with and without heterogeneous chemistry. The transport is to be carried out using dynamical fields from a 4D data assimilation model being developed under separate funding from this task. We have successfully carried out a series of single constituent transport experiments. One of the things demonstrated by these runs was the difficulty in obtaining observed low N2O abundances in the vortex without simultaneously obtaining very high ozone values. Because the runs start in late December, this difficulty arises in the attempt to define consistent initial conditions for the 3D model. To accomplish a consistent set of initial conditions, we are using the 2D photochemistry-transport model of Jackman and Douglass and mapping in potential temperature, potential vorticity space as developed by Schoeberl and coworkers.

  14. Voluntary Wheel Running in Mice.

    PubMed

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  15. Voluntary Wheel Running in Mice

    PubMed Central

    Goh, Jorming; Ladiges, Warren

    2015-01-01

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. The basic protocol consists of allowing mice to run freely on the open surface of a slanted plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that the frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. PMID:26629772

  16. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where θ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
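
    A compact sketch of the emulator-based calibration idea (a generic stand-in, not the paper's implementation): a Gaussian-process emulator is trained on an ensemble of model runs, and an inexpensive Metropolis sampler then queries the emulator instead of the expensive model. The function expensive_model below is a hypothetical placeholder for η(θ).

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def expensive_model(theta):            # hypothetical physics model eta(theta)
            return np.sin(3 * theta) + 0.5 * theta

        # Ensemble of model runs at design points -> train the emulator
        theta_design = np.linspace(0, 2, 15)[:, None]
        eta_design = expensive_model(theta_design.ravel())
        emulator = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-6)).fit(
            theta_design, eta_design)

        y_obs, sigma = 1.20, 0.05              # measurement and its error

        def log_post(theta):
            if not 0 <= theta <= 2:            # uniform prior on [0, 2]
                return -np.inf
            mu = emulator.predict(np.array([[theta]]))[0]
            return -0.5 * ((y_obs - mu) / sigma) ** 2

        # Metropolis sampling against the emulator (cheap per step)
        rng = np.random.default_rng(2)
        theta, chain = 1.0, []
        for _ in range(20000):
            prop = theta + 0.1 * rng.standard_normal()
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            chain.append(theta)
        print("posterior mean/sd:", np.mean(chain[5000:]), np.std(chain[5000:]))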

  17. Does financial development reduce environmental degradation? Evidence from a panel study of 129 countries.

    PubMed

    Al-Mulali, Usama; Tang, Chor Foon; Ozturk, Ilhan

    2015-10-01

    The purpose of this study is to explore the effect of financial development on CO2 emission in 129 countries classified by the income level. A panel CO2 emission model using urbanisation, GDP growth, trade openness, petroleum consumption and financial development variables that are major determinants of CO2 emission was constructed for the 1980-2011 period. The results revealed that the variables are cointegrated based on the Pedroni cointegration test. The dynamic ordinary least squares (OLS) and the Granger causality test results also show that financial development can improve environmental quality in the short run and long run due to its negative effect on CO2 emission. The rest of the determinants, especially petroleum consumption, are determined to be the major source of environmental damage in most of the income group countries. Based on the results obtained, the investigated countries should provide banking loans to projects and investments that can promote energy savings, energy efficiency and renewable energy to help these countries reduce environmental damage in both the short and long run.

  18. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese hamster ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
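
    Multivariate calibration of this kind is commonly built on partial least squares; the sketch below is a generic illustration with synthetic spectra (not the models described above), mapping Raman-like spectra to a metabolite concentration and checking accuracy on an independent hold-out set.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        wavenumbers = np.linspace(400, 1800, 700)

        def spectrum(conc):
            """Synthetic Raman-like spectrum: an analyte band at 1004 1/cm whose
            height scales with concentration, plus baseline and noise."""
            band = conc * np.exp(-((wavenumbers - 1004) / 8) ** 2)
            baseline = 0.3 + 1e-4 * wavenumbers
            return band + baseline + 0.01 * rng.standard_normal(wavenumbers.size)

        c_train = rng.uniform(0.5, 5.0, 60)          # e.g. glucose, g/L (assumed)
        X_train = np.array([spectrum(c) for c in c_train])
        pls = PLSRegression(n_components=4).fit(X_train, c_train)

        c_test = rng.uniform(0.5, 5.0, 20)           # independent "batch"
        X_test = np.array([spectrum(c) for c in c_test])
        rmsep = np.sqrt(np.mean((pls.predict(X_test).ravel() - c_test) ** 2))
        print(f"RMSEP = {rmsep:.3f} g/L")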

  19. Anticipation by basketball defenders: an explanation based on the three-dimensional inverted pendulum model.

    PubMed

    Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo

    2014-01-01

    We previously estimated the timing at which ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called the cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attacker's centre of mass (CoM) and contact foot. We used a model that simulates the future trajectory of the opponent's CoM based upon inverted pendulum movement. The hypothesis was tested by comparing the observed defender's cue timing, the model-estimated cue timing using the inverted pendulum model (IPM cue timing) and the cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for the position and velocity of the CoM and the contact foot at that time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forward-simulating inverted pendulum movement.
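
    A minimal sketch of the forward simulation implied by the inverted-pendulum account (our illustration; all parameter values are hypothetical): the CoM is propagated over the contact foot under linearized inverted-pendulum dynamics, x'' = (g/L)·x, and the sign of the divergence predicts the cut direction.

        g, L = 9.81, 1.0                   # gravity; CoM height over contact foot (m)

        def predict_cut_direction(x0, v0, horizon=0.6, dt=0.001, threshold=0.3):
            """Integrate the linearized inverted pendulum x'' = (g/L) x and return
            ('left'/'right', time) once the lateral CoM offset exceeds threshold (m).
            x0, v0: CoM position/velocity relative to the contact foot."""
            x, v = x0, v0
            for step in range(int(horizon / dt)):
                v += (g / L) * x * dt
                x += v * dt
                if abs(x) > threshold:
                    return ("right" if x > 0 else "left"), step * dt
            return "undecided", horizon

        # Attacker plants the foot 5 cm left of the CoM, CoM moving right at 0.4 m/s:
        print(predict_cut_direction(x0=0.05, v0=0.4))   # -> ('right', ~0.4 s)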

  20. To stock or not to stock? Assessing restoration potential of a remnant American shad spawning run with hatchery supplementation

    USGS Publications Warehouse

    Bailey, Michael M.; Zydlewski, Joseph D.

    2013-01-01

    Hatchery supplementation has been widely used as a restoration technique for American Shad Alosa sapidissima on the East Coast of the USA, but results have been equivocal. In the Penobscot River, Maine, dam removals and other improvements to fish passage will likely reestablish access to the majority of this species’ historic spawning habitat. Additional efforts being considered include the stocking of larval American Shad. The decision about whether to stock a river system undergoing restoration should be made after evaluating the probability of natural recolonization and examining the costs and benefits of potentially accelerating recovery using a stocking program. However, appropriate evaluation can be confounded by a dearth of information about the starting population size and age structure of the remnant American Shad spawning run in the river. We used the Penobscot River as a case study to assess the theoretical sensitivity of recovery time to either scenario (stocking or not) by building a deterministic model of an American Shad population. This model is based on the best available estimates of size at age, fecundity, rate of iteroparity, and recruitment. Density dependence was imposed, such that the population reached a plateau at an arbitrary recovery goal of 633,000 spawning adults. Stocking had a strong accelerating effect on the time to modeled recovery (as measured by the time to reach 50% of the recovery goal) in the base model, but stocking had diminishing effects with larger population sizes. There is a diminishing return to stocking when the starting population is modestly increased. With a low starting population (a spawning run of 1,000), supplementation with 12 million larvae annually accelerated modeled recovery by 12 years. Only a 2-year acceleration was observed if the starting population was 15,000. Such a heuristic model may aid managers in assessing the costs and benefits of stocking by incorporating a structured decision framework.
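
    The qualitative result (stocking helps most when the remnant run is small) can be reproduced with a toy projection of the kind described; the sketch below uses illustrative rates, not the paper's calibrated estimates, growing a spawner population with a ceiling at the recovery goal and measuring years to reach half of it, with and without larval stocking.

        GOAL = 633_000          # recovery goal (spawning adults), from the abstract

        def years_to_half_goal(n0, larvae_stocked=0, r=0.35, larva_to_spawner=2e-4):
            """Project spawners with logistic-style density dependence capped at
            GOAL; return years until the run reaches 50% of the goal.
            r and larva_to_spawner are illustrative assumptions."""
            n, years = float(n0), 0
            while n < 0.5 * GOAL and years < 200:
                recruits = r * n * (1 - n / GOAL)            # natural recruitment
                stocked = larvae_stocked * larva_to_spawner * (1 - n / GOAL)
                n = min(GOAL, n + recruits + stocked)
                years += 1
            return years

        for n0 in (1_000, 15_000):
            t0 = years_to_half_goal(n0)
            t1 = years_to_half_goal(n0, larvae_stocked=12_000_000)
            print(f"start {n0:>6}: no stocking {t0} yr, "
                  f"stocking {t1} yr, gain {t0 - t1} yr")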

  1. Swing-leg trajectory of running guinea fowl suggests task-level priority of force regulation rather than disturbance rejection.

    PubMed

    Blum, Yvonne; Vejdani, Hamid R; Birn-Jeffery, Aleksandra V; Hubicki, Christian M; Hurst, Jonathan W; Daley, Monica A

    2014-01-01

    To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain.

  2. Swing-Leg Trajectory of Running Guinea Fowl Suggests Task-Level Priority of Force Regulation Rather than Disturbance Rejection

    PubMed Central

    Blum, Yvonne; Vejdani, Hamid R.; Birn-Jeffery, Aleksandra V.; Hubicki, Christian M.; Hurst, Jonathan W.; Daley, Monica A.

    2014-01-01

    To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain. PMID:24979750

  3. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    NASA Astrophysics Data System (ADS)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly Monte Carlo-sampled model runs were completed and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results show a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflects the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs was able to generate seasonal cycles that matched the observations at all three MBL stations. Although the observed seasonal cycles were found to fall within the confidence limits of the ACCMIP members, this was because the model seasonal cycles spanned extremely wide ranges and there was no single ACCMIP member that performed best for each station. Further work is required to examine the parameterisation of convective mixing in the models to see if this erodes the isolation of the marine boundary layer from the free troposphere and thus hides the models' real ability to reproduce ozone seasonal cycles over marine stations.
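
    The quasi-random sampling step can be reproduced with a low-discrepancy sequence; here is a minimal generic sketch (not the authors' code) drawing 98 Sobol' points across uncertain emission and rate-coefficient scale factors. The parameter names and ranges are assumptions for illustration only.

        import numpy as np
        from scipy.stats import qmc

        # Hypothetical uncertain inputs: scale factors on precursor emissions and
        # on kinetic/photolysis parameters, each with an assumed uncertainty range.
        names = ["NOx", "CO", "VOC", "k_O3+NO", "j_NO2"]
        low  = np.array([0.7, 0.7, 0.5, 0.85, 0.8])
        high = np.array([1.3, 1.3, 1.5, 1.15, 1.2])

        sampler = qmc.Sobol(d=len(names), scramble=True, seed=4)
        unit = sampler.random(98)            # 98 quasi-random model runs
        # (SciPy warns that 98 is not a power of two; harmless for a sketch.)
        runs = qmc.scale(unit, low, high)    # map to the parameter ranges

        for name, col in zip(names, runs.T):
            print(f"{name:>8}: min {col.min():.3f}, max {col.max():.3f}")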

  4. Calculating the renormalisation group equations of a SUSY model with Susyno

    NASA Astrophysics Data System (ADS)

    Fonseca, Renato M.

    2012-10-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features: Susyno contains functions that (a) calculate the Lagrangian of supersymmetric models and (b) calculate some group theoretical quantities. Some of these functions are available to the user and can be freely used. A built-in help system provides detailed information. Running time: Tests were made using a computer with an Intel Core i5 760 CPU, running under Ubuntu 11.04 and with Mathematica 8.0.1 installed. Using the option to suppress printing, the one- and two-loop beta functions of the MSSM were obtained in 2.5 s (NMSSM: 5.4 s). Note that the running time scales up very quickly with the total number of fields in the model. References: [1] S.P. Martin and M.T. Vaughn, Phys. Rev. D 50 (1994) 2282. [Erratum-ibid D 78 (2008) 039903] [arXiv:hep-ph/9311340]. [2] Y. Yamada, Phys. Rev. D 50 (1994) 3537 [arXiv:hep-ph/9401241].

  5. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    NASA Astrophysics Data System (ADS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, so the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated into the dimension-reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.
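
    As a baseline for the dimension-reduction step, standard unsupervised LTSA is available off the shelf; the sketch below reduces a mixed-domain feature matrix before classification. It illustrates plain LTSA only: the discriminant semi-supervised variant proposed in the paper is not part of scikit-learn, and the data here are synthetic.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(5)
        # Hypothetical mixed-domain features for 300 samples in 3 running states
        X = rng.standard_normal((300, 40))
        y = np.repeat([0, 1, 2], 100)
        X += 0.8 * y[:, None]                    # crude class separation for the demo

        ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=5, method="ltsa")
        Z = ltsa.fit_transform(X)                # fused low-dimensional features

        clf = KNeighborsClassifier(5).fit(Z[::2], y[::2])
        print("hold-out accuracy:", clf.score(Z[1::2], y[1::2]))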

  6. Influence of backup bearings and support structure dynamics on the behavior of rotors with active supports

    NASA Technical Reports Server (NTRS)

    Flowers, George T.

    1994-01-01

    Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501.

  7. "Hit-and-Run" leaves its mark: catalyst transcription factors and chromatin modification.

    PubMed

    Varala, Kranthi; Li, Ying; Marshall-Colón, Amy; Para, Alessia; Coruzzi, Gloria M

    2015-08-01

    Understanding how transcription factor (TF) binding is related to gene regulation is a moving target. We recently uncovered genome-wide evidence for a "Hit-and-Run" model of transcription. In this model, a master TF "hits" a target promoter to initiate a rapid response to a signal. As the "hit" is transient, the model invokes recruitment of partner TFs to sustain transcription over time. Following the "run", the master TF "hits" other targets to propagate the response genome-wide. As such, a TF may act as a "catalyst" to mount a broad and acute response in cells that first sense the signal, while the recruited TF partners promote long-term adaptive behavior in the whole organism. This "Hit-and-Run" model likely has broad relevance, as TF perturbation studies across eukaryotes show small overlaps between TF-regulated and TF-bound genes, implicating transient TF-target binding. Here, we explore this "Hit-and-Run" model to suggest molecular mechanisms and its biological relevance. © 2015 The Authors. Bioessays published by WILEY Periodicals, Inc.

  8. ADHydro: A Parallel Implementation of a Large-scale High-Resolution Multi-Physics Distributed Water Resources Model Using the Charm++ Run Time System

    NASA Astrophysics Data System (ADS)

    Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.

    2014-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi-3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, a joint effort between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, and water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run time system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model, such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.

  9. Adaptation of time line analysis program to single pilot instrument flight research

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.; Shaughnessy, J. D.

    1978-01-01

    A data base was developed for SPIFR operation and the program was run. The outputs indicated that further work was necessary on the workload models. In particular, the workload model for the cognitive channel should be modified as the output workload appears to be too small. Included in the needed refinements are models to show the workload when in turbulence, when overshooting a radial or glideslope, and when copying air traffic control clearances.

  10. Testing a blowing snow model against distributed snow measurements at Upper Sheep Creek, Idaho, United States of America

    Treesearch

    Rajiv Prasad; David G. Tarboton; Glen E. Liston; Charles H. Luce; Mark S. Seyfried

    2001-01-01

    In this paper a physically based snow transport model (SnowTran-3D) was used to simulate snow drifting over a 30 m grid and was compared to detailed snow water equivalence (SWE) surveys on three dates within a small 0.25 km2 subwatershed, Upper Sheep Creek. Two precipitation scenarios and two vegetation scenarios were used to carry out four snow transport model runs in...

  11. DoD Lead System Integrator (LSI) Transformation - Creating a Model Based Acquisition Framework (MBAF)

    DTIC Science & Technology

    2014-04-30

    (Excerpts:) ... cost to acquire systems, as design maturity could be verified incrementally as the system was developed vice waiting for specific large “big bang” ... can the Model Based Acquisition Framework (MBAF) be applied to simulate or optimize process variations on programs? LSI Roles and Responsibilities: a review of the roles and ... the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to ...

  12. EnOI-IAU Initialization Scheme Designed for Decadal Climate Prediction System IAP-DecPreS

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhou, Tianjun; Zheng, Fei

    2018-02-01

    A decadal climate prediction system named IAP-DecPreS was constructed at the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, based on the fully coupled model FGOALS-s2 and a newly developed initialization scheme referred to as EnOI-IAU. In this paper, we introduce the design of the EnOI-IAU scheme, assess the accuracy of initialization integrations using EnOI-IAU, and preliminarily evaluate the hindcast skill of IAP-DecPreS. The EnOI-IAU scheme integrates two conventional assimilation approaches, ensemble optimal interpolation (EnOI) and incremental analysis update (IAU), which are applied to calculate analysis increments and to incorporate them into the model, respectively. Three continuous initialization (INIT) runs were conducted for the period 1950-2015, in which observational sea surface temperature (SST) from HadISST1.1 and subsurface ocean temperature profiles from the EN4.1.1 data set were assimilated. Nine-member, 10-year-long hindcast runs initiated from the INIT runs were then conducted for each year in the period 1960-2005. The accuracy of the INIT runs is evaluated from three aspects: upper-700 m ocean temperature, the temporal evolution of SST anomalies, and the dominant interdecadal variability modes, the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Finally, preliminary evaluation of the ensemble mean of the hindcast runs suggests that IAP-DecPreS has skill in predicting the PDO-related SST anomalies in the midlatitude North Pacific and the AMO-related SST anomalies in the tropical North Atlantic.
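
    For reference, the EnOI analysis step combines a static ensemble-derived covariance B with observations, giving the increment K(y - Hx_f) with K = a·B·H^T (a·H·B·H^T + R)^(-1); the IAU part then spreads this increment gradually over the assimilation window. A minimal generic sketch (textbook form, not the IAP code):

        import numpy as np

        def enoi_increment(xf, ens, H, y, R, alpha=0.8):
            """EnOI analysis increment with covariance B from a static ensemble.
            xf: forecast state (n,); ens: static ensemble (n, m); H: obs operator
            (p, n); y: observations (p,); R: obs-error covariance (p, p);
            alpha: scaling factor for the stationary covariance (assumed value)."""
            A = ens - ens.mean(axis=1, keepdims=True)          # ensemble anomalies
            B = A @ A.T / (ens.shape[1] - 1)
            K = alpha * B @ H.T @ np.linalg.inv(alpha * H @ B @ H.T + R)
            return K @ (y - H @ xf)

        # Tiny demo: 5-point state, observations at points 1 and 3
        rng = np.random.default_rng(6)
        ens = rng.standard_normal((5, 30))
        xf = np.zeros(5)
        H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0
        y = np.array([0.5, -0.3]); R = 0.1 * np.eye(2)
        xa = xf + enoi_increment(xf, ens, H, y, R)
        print(xa)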

  13. Can we trust climate models to realistically represent severe European windstorms?

    NASA Astrophysics Data System (ADS)

    Trzeciak, Tomasz M.; Knippertz, Peter; Pirret, Jennifer S. R.; Williams, Keith D.

    2016-06-01

    Cyclonic windstorms are one of the most important natural hazards for Europe, but robust climate projections of the position and strength of the North Atlantic storm track are not yet possible, posing significant risks to European societies and the (re)insurance industry. Previous studies addressing the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data show large disagreement between different climate models, different ensemble members of the same model and observed climatologies of intense cyclones. One weakness of such evaluations lies in the difficulty of separating influences of the climate model's basic state from the influence of fast processes on the development of the most intense storms, which could create compensating effects and therefore suggest higher reliability than there really is. This work aims to shed new light on this problem through a cost-effective "seamless" approach of hindcasting 20 historical severe storms with two global climate models, ECHAM6 and the GA4 configuration of the Met Office Unified Model, run in numerical weather prediction mode using different lead times and horizontal and vertical resolutions. These runs are then compared to re-analysis data. The main conclusions from this work are: (a) objectively identified cyclone tracks are represented satisfactorily by most hindcasts; (b) sensitivity to vertical resolution is low; (c) cyclone depth is systematically under-predicted at a coarse resolution of T63 by both climate models; (d) no systematic bias is found at the higher resolution of T127 out to about three days, demonstrating that climate models are in fact able to represent the complex dynamics of explosively deepening cyclones well, if given the correct initial conditions; (e) an analysis using a recently developed diagnostic tool based on the surface pressure tendency equation points to too-weak diabatic processes, mainly latent heating, as the main source of the under-prediction in the coarse-resolution runs. Finally, an interesting implication of these results is that the too low number of deep cyclones in many free-running climate simulations may be related to an insufficient number of storm-prone initial conditions. This question will be addressed in future work.

  14. Usefulness of running wheel for detection of congestive heart failure in dilated cardiomyopathy mouse model.

    PubMed

    Sugihara, Masami; Odagiri, Fuminori; Suzuki, Takeshi; Murayama, Takashi; Nakazato, Yuji; Unuma, Kana; Yoshida, Ken-ichi; Daida, Hiroyuki; Sakurai, Takashi; Morimoto, Sachio; Kurebayashi, Nagomi

    2013-01-01

    Inherited dilated cardiomyopathy (DCM) is a progressive disease that often results in death from congestive heart failure (CHF) or sudden cardiac death (SCD). Mouse models with human DCM mutations are useful for investigating the developmental mechanisms of CHF and SCD, but knowledge of the severity of CHF in live mice is necessary. We aimed to diagnose CHF in live DCM model mice by measuring voluntary exercise using a running wheel, and to determine the causes of death in these mice. A knock-in mouse with a mutation in cardiac troponin T (ΔK210) (DCM mouse), which results in frequent death with a t1/2 of 70 to 90 days, was used as a DCM model. Until 2 months of age, average wheel-running activity was similar between wild-type and DCM mice (approximately 7 km/day). At approximately 3 months, some DCM mice demonstrated low running activity (LO: <1 km/day) while others maintained high running activity (HI: >5 km/day). In the LO group, the lung weight/body weight ratio was much higher than that in the other groups, and the lungs were infiltrated with hemosiderin-loaded alveolar macrophages. Furthermore, echocardiography showed more severe ventricular dilation and a lower ejection fraction, whereas electrocardiography (ECG) revealed QRS widening. There were two patterns in the time course of running activity before death in DCM mice: death with maintained activity and death with decreased activity. Our results indicate that DCM mice with low running activity developed severe CHF and that running wheels are useful for the detection of CHF in mouse models. We found that approximately half of ΔK210 DCM mice die suddenly before the onset of CHF, whereas the others develop CHF, deteriorate within 10 to 20 days, and die.

  15. A mechanical model of metatarsal stress fracture during distance running.

    PubMed

    Gross, T S; Bunch, R P

    1989-01-01

    A model of metatarsal mechanics has been proposed as a link between the high incidence of second and third metatarsal stress fractures and the large stresses measured beneath the second and third metatarsal heads during distance running. Eight discrete piezoelectric vertical stress transducers were used to record the forefoot stresses of 21 male distance runners. Based upon load-bearing area estimates derived from footprints, plantar forces were estimated. The highest forces were estimated beneath the second and first metatarsal heads (341.1 N and 279.1 N, respectively). Considering the toe as a hinged cantilever and the metatarsal as a proximally attached rigid cantilever allowed estimation of metatarsal midshaft bending strain, shear, and axial forces. Bending strain was estimated to be greatest in the second metatarsal (6662 με), a value 6.9 times greater than the estimated first metatarsal strain. Predicted third, fourth, and fifth metatarsal strains ranged between 4832 and 5241 με. Shear force estimates were also greatest in the second metatarsal (203.0 N). Axial forces were highest in the first metatarsal (593.2 N) due to large hallux forces in relation to the remaining toes. Although a first-order model, these data highlight the structural demands placed upon the second metatarsal, a location of high metatarsal stress fracture incidence during distance running.
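
    The midshaft bending strain estimate follows beam theory, ε = M·c/(E·I); the sketch below reproduces the order of magnitude with assumed second-metatarsal geometry (all values are illustrative, not the paper's measured inputs).

        import numpy as np

        # Assumed second-metatarsal midshaft properties (illustrative):
        F = 341.1                 # plantar force under the metatarsal head, N
        L = 0.022                 # effective lever arm from head to midshaft, m
        d_o, d_i = 0.009, 0.005   # outer/inner diameters of an idealized tube, m
        E = 17e9                  # cortical bone Young's modulus, Pa

        I = np.pi * (d_o**4 - d_i**4) / 64       # second moment of area
        M = F * L                                # midshaft bending moment
        strain = M * (d_o / 2) / (E * I)         # epsilon = M c / (E I)
        # Prints ~6800 microstrain, on the order of the ~6662 reported above.
        print(f"bending strain ~ {strain * 1e6:.0f} microstrain")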

  16. Synthesis of Polysyllabic Sequences of Thai Tones Using a Generative Model of Fundamental Frequency Contours

    NASA Astrophysics Data System (ADS)

    Seresangtakul, Pusadee; Takara, Tomio

    In this paper, the distinctive tones of Thai in running speech are studied. We present rules to synthesize F0 contours of Thai tones in running speech using a generative model of F0 contours. With this method, the pitch contours of Thai polysyllabic words, both disyllabic and trisyllabic, were analyzed, and coarticulation effects of Thai tones in running speech were found. Based on the analysis of the polysyllabic words using this model, rules are derived and applied to synthesize Thai polysyllabic tone sequences. We performed listening tests to evaluate the intelligibility of the rules for Thai tone generation. The average intelligibility scores were 98.8% and 96.6% for disyllabic and trisyllabic words, respectively. From these results, the tone-generation rules were shown to be effective. Furthermore, we constructed connecting rules to synthesize suprasegmental F0 contours using the trisyllable training rules' parameters: the parameters of the first, third, and second syllables were assigned to the initial, ending, and remaining syllables of a sentence, respectively. Even with such a simple rule, the synthesized phrases/sentences were completely identified in listening tests. The MOS (mean opinion score) was 3.50, while the original and analysis/synthesis samples scored 4.82 and 3.59, respectively.
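
    Generative F0 models of this family (e.g. the Fujisaki model, which the described approach resembles; that it is the exact model used here is our assumption) superpose phrase and accent/tone commands on a base frequency in the log-F0 domain. A minimal sketch with illustrative command timings and amplitudes:

        import numpy as np

        def phrase(t, t0, ap, alpha=3.0):
            """Phrase component: impulse response of a second-order filter."""
            u = np.maximum(t - t0, 0.0)
            return ap * alpha**2 * u * np.exp(-alpha * u)

        def accent(t, t1, t2, aa, beta=20.0, gamma=0.9):
            """Accent/tone component: smoothed step between onset t1 and offset t2;
            negative amplitude aa models a falling tone command."""
            def g(u):
                u = np.maximum(u, 0.0)
                return np.minimum(1.0 - (1.0 + beta * u) * np.exp(-beta * u), gamma)
            return aa * (g(t - t1) - g(t - t2))

        fb = 120.0                                  # base frequency, Hz
        t = np.linspace(0, 1.2, 1200)
        lnF0 = (np.log(fb) + phrase(t, 0.0, 0.5)
                + accent(t, 0.15, 0.35, 0.4)        # syllable 1 tone command
                + accent(t, 0.45, 0.65, -0.3)       # syllable 2 (falling)
                + accent(t, 0.75, 0.95, 0.2))       # syllable 3
        F0 = np.exp(lnF0)
        print(f"F0 range: {F0.min():.1f} - {F0.max():.1f} Hz")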

  17. On the use of tower-flux measurements to assess the performance of global ecosystem models

    NASA Astrophysics Data System (ADS)

    El Maayar, M.; Kucharik, C.

    2003-04-01

    Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge gained from local observations into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, the earth's vegetation is typically represented by a limited number of plant functional types (PFT; groups of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters is assigned. Thus, a legitimate question arises: how does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to a group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare the seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were thus performed: (a) detailed runs, in which observed vegetation characteristics (leaf area index, vegetation height,...) and soil carbon content, in addition to climate and soil type, are specified for the model run; and (b) generic runs, in which only the observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.

  18. RNA 3D Structure Modeling by Combination of Template-Based Method ModeRNA, Template-Free Folding with SimRNA, and Refinement with QRNAS.

    PubMed

    Piatkowski, Pawel; Kasprzak, Joanna M; Kumar, Deepak; Magnus, Marcin; Chojnowski, Grzegorz; Bujnicki, Janusz M

    2016-01-01

    RNA encompasses an essential part of all known forms of life. The functions of many RNA molecules are dependent on their ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is laborious and challenging, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that either utilize information derived from known structures of other RNA molecules (by way of template-based modeling) or attempt to simulate the physical process of RNA structure formation (by way of template-free modeling). All computational methods suffer from various limitations that make theoretical models less reliable than high-resolution experimentally determined structures. This chapter provides a protocol for computational modeling of RNA 3D structure that overcomes major limitations by combining two complementary approaches: template-based modeling that is capable of predicting global architectures based on similarity to other molecules but often fails to predict local unique features, and template-free modeling that can predict the local folding, but is limited to modeling the structure of relatively small molecules. Here, we combine the use of a template-based method ModeRNA with a template-free method SimRNA. ModeRNA requires a sequence alignment of the target RNA sequence to be modeled with a template of the known structure; it generates a model that predicts the structure of a conserved core and provides a starting point for modeling of variable regions. SimRNA can be used to fold small RNAs (<80 nt) without any additional structural information, and to refold parts of models for larger RNAs that have a correctly modeled core. ModeRNA can be either downloaded, compiled and run locally or run through a web interface at http://genesilico.pl/modernaserver/ . SimRNA is currently available to download for local use as a precompiled software package at http://genesilico.pl/software/stand-alone/simrna and as a web server at http://genesilico.pl/SimRNAweb . For model optimization we use QRNAS, available at http://genesilico.pl/qrnas .

  19. Hindcasting the Madden‐Julian Oscillation With a New Parameterization of Surface Heat Fluxes

    PubMed Central

    Wang, Jingfeng; Lin, Wenshi

    2017-01-01

    Abstract The recently developed maximum entropy production (MEP) model, an alternative parameterization of surface heat fluxes, is incorporated into the Weather Research and Forecasting (WRF) model. A pair of WRF cloud‐resolving experiments (5 km grids) using the bulk transfer model (WRF default) and the MEP model of surface heat fluxes are performed to hindcast the October Madden‐Julian oscillation (MJO) event observed during the 2011 Dynamics of the MJO (DYNAMO) field campaign. The simulated surface latent and sensible heat fluxes in the MEP and bulk transfer model runs are in general consistent with in situ observations from two research vessels. Compared to the bulk transfer model, the convection envelope is strengthened in the MEP run and shows a more coherent propagation over the Maritime Continent. The simulated precipitable water in the MEP run is in closer agreement with the observations. Precipitation in the MEP run is enhanced during the active phase of the MJO with significantly reduced regional dry and wet biases. Large‐scale ocean evaporation is stronger in the MEP run leading to stronger boundary layer moistening to the east of the convection center, which facilitates the eastward propagation of the MJO. PMID:29399269

  20. Airport costs and production technology : a translog cost function analysis with implications for economic development.

    DOT National Transportation Integrated Search

    2011-07-01

    Based upon 50 large and medium hub airports over a 13-year period, this research estimates one- and two-output translog models of airport short-run operating costs. Output is passengers transported on non-stop segments and pounds of cargo shipped....
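
    For reference, a two-output short-run translog cost function of the kind estimated typically takes the generic form below (notation assumed here, not copied from the report), with outputs y_1 (passengers) and y_2 (cargo), input prices w_k, and a fixed capital measure K; short-run marginal costs and scale economies then follow from its derivatives:

        \ln C^{SR} = \alpha_0 + \sum_{i} \alpha_i \ln y_i
                   + \tfrac{1}{2} \sum_{i} \sum_{j} \gamma_{ij} \ln y_i \ln y_j
                   + \sum_{k} \beta_k \ln w_k
                   + \sum_{k} \sum_{i} \rho_{ki} \ln w_k \ln y_i
                   + \delta \ln K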

  1. Airport costs and production technology : a translog cost function analysis with implications for economic development.

    DOT National Transportation Integrated Search

    2011-12-01

    Based upon 50 large and medium hub airports over a 13-year period, this research estimates one- and two-output translog models of airport short-run operating costs. Output is passengers transported on non-stop segments and pounds of cargo shipped....

  2. FIAMODEL: Users Guide Version 3.0.

    Treesearch

    Scott A. Pugh; David D. Reed; Kurt S. Pregitzer; Patrick D. Miles

    2002-01-01

    FIAMODEL is a geographic information system (GIS) program used to summarize Forest Inventory and Analysis (FIA, USDA Forest Service) data such as volume. The model runs in ArcView and allows users to select FIA plots with heads-up digitizing, overlays of digital map layers, or queries based on specific plot attributes.

  3. PBL and the Postmodern Condition--Knowledge Production in University Education

    ERIC Educational Resources Information Center

    Ravn, Ole; Jensen, Annie Aarup

    2016-01-01

    In this article we discuss the contemporary conditions for running the Aalborg Problem-Based Learning (PBL) model. We try to pinpoint key characteristics of these conditions, emphasising Lyotard's conception of knowledge production referred to as the move towards a postmodern condition for knowledge. Through discussions of this alleged condition…

  4. Running around in Circles: Quality Assurance Reforms in Georgia

    ERIC Educational Resources Information Center

    Jibladze, Elene

    2013-01-01

    This article investigates the implementation of a quality assurance system in Georgia as a particular case of "Bologna transplant" in a transitioning country. In particular, the article discusses to what extent new concepts, institutions and models framed as "European" have been institutionalised in Georgia. Based on an outcome…

  5. Building an Integrated Environment for Multimedia

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Multimedia courseware on the solar system and earth science suitable for use in elementary, middle, and high schools was developed under this grant. The courseware runs on Silicon Graphics, Incorporated (SGI) workstations and personal computers (PCs). There is also a version of the courseware accessible via the World Wide Web. Accompanying multimedia database systems were also developed to enhance the multimedia courseware. The database systems accompanying the PC software are based on the relational model, while the database systems accompanying the SGI software are based on the object-oriented model.

  6. Passive Nosetip Technology (PANT) Program. Volume X. Summary of Experimental and Analytical Results

    DTIC Science & Technology

    1975-01-01

    [Extraction-damaged figure-list excerpt; recoverable captions:] Comparison of scallop calorimeter data with sandgrain-type calorimeter data. Geometry for 1.5-inch nose radius camphor model. Shape profile history for a camphor model tested in the NOL hypersonic wind tunnel: (a) Run 007, Sting 2, graphite; (b) PANT Run 204, camphor. Figure 2-2, Comparison of Transitional Shapes: (a) Run 006, Sting 2, graphite; (b) PANT Run 216, camphor low-temperature ablator.

  7. Running of the scalar spectral index in bouncing cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehners, Jean-Luc; Wilson-Ewing, Edward, E-mail: jean-luc.lehners@aei.mpg.de, E-mail: wilson-ewing@aei.mpg.de

    We calculate the running of the scalar index in the ekpyrotic and matter bounce cosmological scenarios, and find that it is typically negative for ekpyrotic models, while it is typically positive for realizations of the matter bounce where multiple fields are present. This can be compared to inflation, where the observationally preferred models typically predict a negative running. The magnitude of the running is expected to be between 10^-4 and 10^-2, leading in some cases to interesting expectations for near-future observations.
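
    For reference, the running is the scale dependence of the scalar spectral index; in the standard convention (generic notation, not specific to this paper):

        \alpha_s \equiv \frac{d n_s}{d \ln k}, \qquad
        n_s(k) \simeq n_s(k_*) + \alpha_s \ln\!\left(\frac{k}{k_*}\right)

    so a negative \alpha_s tilts the spectrum increasingly red toward small scales, while a positive \alpha_s does the opposite.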

  8. Differences in Train-induced Vibration between Hard Soil and Soft Soil

    NASA Astrophysics Data System (ADS)

    Noyori, M.; Yokoyama, H.

    2017-12-01

    Vibration and noise caused by running trains sometimes raise environmental issues. Train-induced vibration is caused by moving static and dynamic axle loads. To reduce the vibration, it is important to clarify the conditions under which train-induced vibration increases. In this study, we clarified the differences in train-induced vibration between hard and soft soil using a numerical simulation method that combines two analyses. The first is a coupled vibration analysis of a running train, the track and the supporting structure, which computes the excitation force imparted to the viaduct slabs by a running train. The second is a three-dimensional vibration analysis of the supporting structure and the ground, into which the excitation force computed by the first analysis is input. The simulations show that ground vibration within 25 m of the center of the viaduct is larger under the soft-soil condition than under the hard-soil condition in almost all frequency ranges. On the other hand, ground vibration at 40 and 50 Hz at a point 50 m from the center of the viaduct is larger under the hard-soil condition than under the soft-soil condition. These results are consistent with those of a two-dimensional FEM based on a ground model alone. We therefore concluded that they arise not from the effects of the running train itself but from the vibration characteristics of the ground.

  9. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
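
    As a rough illustration of what a mean line estimate involves (this is not the PUMPA code or its loss model), the sketch below computes centrifugal pump head from Euler work with the Wiesner slip correlation; all geometry and operating values are invented:

      # Mean line sketch: Euler head with Wiesner slip, no inlet swirl.
      import math

      def euler_head(omega, r2, b2, Q, beta2b_deg, Z, g=9.81):
          U2 = omega * r2                                 # impeller tip speed [m/s]
          Cm2 = Q / (2.0 * math.pi * r2 * b2)             # meridional exit velocity [m/s]
          beta2b = math.radians(beta2b_deg)               # blade angle from tangential
          sigma = 1.0 - math.sqrt(math.sin(beta2b)) / Z**0.7  # Wiesner slip factor
          Ctheta2 = sigma * U2 - Cm2 / math.tan(beta2b)   # absolute exit swirl [m/s]
          return U2 * Ctheta2 / g                         # Euler head [m]

      # crude off-design sweep: head falls as flow rate rises
      for Q in (0.02, 0.04, 0.06):                        # m^3/s
          print(Q, round(euler_head(600.0, 0.1, 0.01, Q, 25.0, 7), 1))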

  10. Granularity as a Cognitive Factor in the Effectiveness of Business Process Model Reuse

    NASA Astrophysics Data System (ADS)

    Holschke, Oliver; Rake, Jannis; Levina, Olga

    Reusing design models is an attractive approach in business process modeling, as modeling efficiency and the quality of design outcomes may be significantly improved. However, reusing conceptual models is not a cost-free effort and has to be carefully designed. While factors such as psychological anchoring and task-adequacy in reuse-based modeling tasks have been investigated, information granularity as a cognitive concept has not yet been at the center of empirical research. We hypothesize that business process granularity, as a factor in design tasks under reuse, has a significant impact on the effectiveness of the resulting business process models. We test our hypothesis in a comparative study employing high and low granularities. The reusable processes provided were taken from widely accessible reference models for the telecommunication industry (enhanced Telecom Operations Map). First experimental results show that recall in tasks involving coarser granularity is lower than in cases of finer granularity. These findings suggest that decision makers in business process management should carefully consider the granularity of the reuse mechanisms they implement. We recognize that, due to our small sample size, the results are not statistically significant, but this preliminary run shows that the study is ready to be conducted on a larger scale.

  11. Systematic approach to verification and validation: High explosive burn models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V&V). Here, we focus on V&V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in the electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as metadata in a header to the data file. To illustrate our approach to V&V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
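
    The automation pattern described, header metadata driving input generation and simulation runs, might look like the following sketch; the file format, metadata keys, and the "hydro" command are hypothetical stand-ins, not the actual HED format or LANL scripts:

      # Sketch: metadata in a data-file header drives input generation and a run.
      import subprocess

      def read_header(path):
          """Collect '# key = value' metadata lines from the top of a data file."""
          meta = {}
          with open(path) as f:
              for line in f:
                  if not line.startswith("#"):
                      break
                  key, _, value = line[1:].partition("=")
                  meta[key.strip()] = value.strip()
          return meta

      meta = read_header("gauge_run_042.dat")          # hypothetical data file
      with open("run.in", "w") as f:
          f.write(f"explosive {meta['HE']}\n")
          f.write(f"impact_velocity {meta['u_impact']}\n")
          f.write(f"gauges {meta['gauges']}\n")
      subprocess.run(["hydro", "run.in"], check=True)  # then plot sim vs. gauge data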

  12. Convergence properties of simple genetic algorithms

    NASA Technical Reports Server (NTRS)

    Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.

    1974-01-01

    The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.
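
    A minimal sketch of the kind of experiment described: a simple genetic algorithm (here on the one-max problem) run while the mutation rate is varied, recording the best fitness per generation as a progress curve. The problem and all parameter values are invented:

      # Simple GA with one-point crossover and bit-flip mutation.
      import random

      def run_ga(pop_size, mutation_rate, n_bits=50, generations=100, seed=0):
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          progress = []
          for _ in range(generations):
              pop.sort(key=sum, reverse=True)
              progress.append(sum(pop[0]))              # best fitness this generation
              parents = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size:
                  a, b = rng.sample(parents, 2)
                  cut = rng.randrange(1, n_bits)        # one-point crossover
                  child = a[:cut] + b[cut:]
                  children.append([1 - g if rng.random() < mutation_rate else g
                                   for g in child])
              pop = children
          return progress

      for mu in (0.001, 0.01, 0.05):                    # systematic parameter sweep
          print(mu, run_ga(50, mu)[-1])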

  13. Two Blades-Up Runs Using the JetStream Navitus Atherectomy Device Achieve Optimal Tissue Debulking of Nonocclusive In-Stent Restenosis: Observations From a Porcine Stent/Balloon Injury Model.

    PubMed

    Shammas, Nicolas W; Aasen, Nicole; Bailey, Lynn; Budrewicz, Jay; Farago, Trent; Jarvis, Gary

    2015-08-01

    To determine the number of blades-up (BU) runs with the JetStream Navitus needed to achieve optimal debulking in a porcine model of femoropopliteal artery in-stent restenosis (ISR). In this porcine model, 8 limbs were implanted with overlapping nitinol self-expanding stents. ISR was treated initially with 2 blades-down (BD) runs followed by 4 BU runs (BU1 to BU4). Quantitative vascular angiography (QVA) was performed at baseline, after the 2 BD runs, and after each BU run. Plaque surface area and percent stenosis within the treated stented segment were measured. Intravascular ultrasound (IVUS) was used to measure minimum lumen area (MLA) and determine IVUS-derived plaque surface area. QVA showed that plaque surface area was significantly reduced between baseline (83.9%±14.8%) and the 2 BD (67.7%±17.0%, p=0.005) and BU1 (55.4%±9.0%, p=0.005) runs, and between the BU1 and BU2 runs (50.7%±9.7%, p<0.05). Percent stenosis behaved similarly, with no further reduction after BU2. There were no further reductions in plaque surface area or percent stenosis with the BU3 and BU4 runs (p=0.10). Similarly, IVUS (24 lesions) confirmed optimal results with 2 BU runs and no additional gain in MLA or reduction in plaque surface area with BU3 and BU4. IVUS confirmed no orbital cutting with the JetStream Navitus. There were no stent strut discontinuities on high-resolution radiographs following atherectomy. The JetStream Navitus achieved optimal tissue debulking after 2 BD and 2 BU runs, with no further statistical gain in debulking after the BU2 run. Operators treating ISR with the JetStream Navitus may be advised to limit their debulking to 2 BD and 2 BU runs to achieve optimal debulking. © The Author(s) 2015.

  14. Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production

    NASA Astrophysics Data System (ADS)

    Elmasri, B.; Rahman, A. F.

    2010-12-01

    Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data-driven modeling approach for gross primary production (GPP) that is based on the process-based model BIOME-BGC. The proposed model was run using available remote sensing data and does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models: empirical models were used to estimate certain input variables, such as light use efficiency (LUE), by applying remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, the maximum photosynthetic rate index (MPRI), is also developed and presented here. This new index is based on the ratio between the near infrared and the green bands (ρ858.5/ρ555). The model was tested and validated against the MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data-driven model showed a strong correlation between the predicted and measured GPP at the two eddy covariance flux tower sites, and it produced better predictions of GPP than the MODIS GPP product. Moreover, the proportion of error in the predicted GPP for MMSF and Harvard Forest was dominated by unsystematic errors, suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors dominating the overall model outcome errors, and improvement in maintenance respiration estimation will result in improved GPP predictions. Although there might be room for improvement in our model outcomes through improved parameterization, our results suggest that such a methodology for running the BIOME-BGC model based entirely on routinely available data can produce good predictions of GPP.
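
    Since the proposed index is a plain band ratio, its computation is direct; a sketch (the reflectance values are placeholders):

      # MPRI = rho_858.5 / rho_555 (near infrared over green), as described above.
      import numpy as np

      def mpri(nir_858, green_555):
          nir = np.asarray(nir_858, dtype=float)
          green = np.asarray(green_555, dtype=float)
          out = np.full(nir.shape, np.nan)
          np.divide(nir, green, out=out, where=green > 0)  # avoid division by zero
          return out

      print(mpri([0.45, 0.50], [0.08, 0.10]))  # -> [5.625 5.0]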

  15. Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications

    NASA Technical Reports Server (NTRS)

    Morris, Christopher I.; Ruf, Joseph H.

    2010-01-01

    Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).

  16. Sustainable Development: A Strategy for Regaining Control of Northern Mali

    DTIC Science & Technology

    2014-06-01

    [Fragmented DTIC excerpt] ...informal attempts to conduct evasive maneuvers to achieve desired end results. The Project for National Security Reform argued that at times "...end runs..." ...recognizing the internal borders that France established in the early twentieth century. Still, Model II optimally assigns projects based on... Project Design... In the end, Model I allocated the projects while addressing the supplemental research questions posed in chapters I and...

  17. Optimisation potential for a SBR plant based upon integrated modelling for dry and wet weather conditions.

    PubMed

    Rönner-Holm, S G E; Kaufmann Alves, I; Steinmetz, H; Holm, N C

    2009-01-01

    Integrated dynamic simulation analysis of a full-scale municipal sequential batch reactor (SBR) wastewater treatment plant (WWTP) was performed using the KOSMO pollution load simulation model for the combined sewer system (CSS) and the ASM3 + EAWAG-BioP model for the WWTP. Various optimising strategies for dry and storm weather conditions were developed to raise the purification and hydraulic performance and to reduce operation costs based on simulation studies with the calibrated WWTP model. The implementation of some strategies on the plant led to lower effluent values and an average annual saving of 49,000 euro including sewage tax, which is 22% of the total running costs. Dynamic simulation analysis of CSS for an increased WWTP influent over a period of one year showed high potentials for reducing combined sewer overflow (CSO) volume by 18-27% and CSO loads for COD by 22%, NH(4)-N and P(total) by 33%. In addition, the SBR WWTP could easily handle much higher influents without exceeding the monitoring values. During the integrated simulation of representative storm events, the total emission load for COD dropped to 90%, the sewer system emitted 47% less, whereas the pollution load in the WWTP effluent increased to only 14% with 2% higher running costs.

  18. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae).

    PubMed

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Degraded areas pose challenging tasks with respect to the sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The course of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape-related aspects seem to be important with respect to these differences. As characteristic phases, a 'delay phase', an 'increase phase' and a 'stagnation phase' were identified. In general, the course of succession could be described by four parameters: (1) 'initial degradation level', (2) 'delay', (3) 'increase rate' and (4) 'recovery level'. By applying the analytic solution of the logistic equation, characteristic values of the parameters were identified for each of the five area types. The model is of practical use because it makes it possible to compare the parameter values obtained in different areas, to give hints for intervention, and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession.
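
    A sketch of fitting the analytic logistic solution to an MIB trajectory to recover such parameters; the data points are invented and the parameterization is our reading of the abstract, not the authors' exact model:

      # Fit the logistic solution to a (synthetic) MIB time series.
      import numpy as np
      from scipy.optimize import curve_fit

      def logistic_mib(t, m0, r, K):
          """m0: initial level, r: increase rate, K: recovery level; a small m0
          relative to K produces the observed delay phase."""
          return K * m0 * np.exp(r * t) / (K + m0 * (np.exp(r * t) - 1.0))

      years = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0])
      mib = np.array([15.0, 18.0, 40.0, 120.0, 350.0, 600.0, 650.0])  # invented MIB

      popt, _ = curve_fit(logistic_mib, years, mib, p0=(15.0, 0.2, 700.0))
      print(dict(zip(("initial_level", "increase_rate", "recovery_level"),
                     np.round(popt, 2))))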

  19. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on Markov chain Monte Carlo (MCMC) sampling are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as particle inertia wanes. We propose a Competitive PSO (CPSO) that helps particles escape from local minima, with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
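
    A toy sketch of the idea: a standard global-best PSO loop in which the worst particles are relocated near randomly chosen personal bests each iteration to restore diversity. This is a loose reading of the competition mechanism, not the authors' CPSO:

      # Global-best PSO with a diversity-restoring "competition" step.
      import numpy as np

      rng = np.random.default_rng(0)

      def rastrigin(x):  # stand-in misfit with many local minima
          return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

      n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
      x = rng.uniform(-5, 5, (n, dim))
      v = np.zeros((n, dim))
      pbest, pval = x.copy(), rastrigin(x)

      for _ in range(500):
          g = pbest[np.argmin(pval)]                     # global best
          v = (w * v + c1 * rng.random((n, dim)) * (pbest - x)
                     + c2 * rng.random((n, dim)) * (g - x))
          x = x + v
          f = rastrigin(x)
          better = f < pval
          pbest[better], pval[better] = x[better], f[better]
          worst = np.argsort(pval)[-3:]                  # reseed the losers near winners
          x[worst] = pbest[rng.integers(0, n, 3)] + rng.normal(0.0, 0.5, (3, dim))
          v[worst] = 0.0

      print("best misfit:", float(pval.min()))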

  20. Non-linear structure formation in the `Running FLRW' cosmological model

    NASA Astrophysics Data System (ADS)

    Bibiano, Antonio; Croton, Darren J.

    2016-07-01

    We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving Newton's gravitational coupling, G(z). In this paper, we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.

  1. An abstract model of rogue code insertion into radio frequency wireless networks. The effects of computer viruses on the Program Management Office

    NASA Astrophysics Data System (ADS)

    Feudo, Christopher V.

    1994-04-01

    This dissertation demonstrates that inadequately protected wireless LANs are more vulnerable to rogue program attack than traditional LANs. Wireless LANs not only run the same risks as traditional LANs, but they also run additional risks associated with an open transmission medium. Intruders can scan radio waves and, given enough time and resources, intercept, analyze, decipher, and reinsert data into the transmission medium. This dissertation describes the development and instantiation of an abstract model of the rogue code insertion process into a DOS-based wireless communications system using radio frequency (RF) atmospheric signal transmission. The model is general enough to be applied to widely used target environments such as UNIX, Macintosh, and DOS operating systems. The methodology and three modules, the prober, activator, and trigger modules, to generate rogue code and insert it into a wireless LAN were developed to illustrate the efficacy of the model. Also incorporated into the model are defense measures against remotely introduced rogue programs and a cost-benefit analysis that determined that such defenses for a specific environment were cost-justified.

  2. A simulation model for studying the role of pre-slaughter factors on the exposure of beef carcasses to human microbial hazards.

    PubMed

    Jordan, D; McEwen, S A; Lammerding, A M; McNab, W B; Wilson, J B

    1999-06-29

    A Monte Carlo simulation model was constructed for assessing the quantity of microbial hazards deposited on cattle carcasses under different pre-slaughter management regimens. The model permits comparison of industry-wide and abattoir-based mitigation strategies and is suitable for studying pathogens such as Escherichia coli O157:H7 and Salmonella spp. Simulations are based on a hierarchical model structure that mimics important aspects of the cattle population prior to slaughter. Stochastic inputs were included so that uncertainty about important input assumptions (such as prevalence of a human pathogen in the live cattle-population) would be reflected in model output. Control options were built into the model to assess the benefit of having prior knowledge of animal or herd-of-origin pathogen status (obtained from the use of a diagnostic test). Similarly, a facility was included for assessing the benefit of re-ordering the slaughter sequence based on the extent of external faecal contamination. Model outputs were designed to evaluate the performance of an abattoir in a 1-day period and included outcomes such as the proportion of carcasses contaminated with a pathogen, the daily mean and selected percentiles of pathogen counts per carcass, and the position of the first infected animal in the slaughter run. A measure of the time rate of introduction of pathogen into the abattoir was provided by assessing the median, 5th percentile, and 95th percentile cumulative pathogen counts at 10 equidistant points within the slaughter run. Outputs can be graphically displayed as frequency distributions, probability densities, cumulative distributions or x-y plots. The model shows promise as an inexpensive method for evaluating pathogen control strategies such as those forming part of a Hazard Analysis and Critical Control Point (HACCP) system.
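
    A stripped-down sketch of the hierarchical Monte Carlo structure: herd-level prevalence drawn from an uncertainty distribution, animal-level infection, and carcass-level pathogen counts, with daily summary outputs. All distributions and parameter values here are assumptions for illustration, not the paper's calibrated inputs:

      # Hierarchical Monte Carlo sketch of carcass contamination in one slaughter day.
      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_day(n_animals=500):
          herd_prev = rng.beta(2, 18)                 # uncertain within-herd prevalence
          infected = rng.random(n_animals) < herd_prev
          counts = np.where(infected,
                            rng.lognormal(3.0, 1.5, n_animals),  # pathogen per carcass
                            0.0)
          first = int(np.argmax(infected)) if infected.any() else -1
          return counts.mean(), infected.mean(), first  # daily summary outputs

      days = np.array([simulate_day() for _ in range(10_000)])
      print("median daily mean count:", np.median(days[:, 0]),
            "95th percentile:", np.percentile(days[:, 0], 95))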

  3. Radial diffusion comparing a THEMIS statistical model with geosynchronous measurements as the outer boundary

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hudson, M. K.; Chen, Y.

    2013-12-01

    The outer-boundary energetic electron flux is used as a driver in radial diffusion calculations, and its precise determination is critical to the solution. A new model was recently proposed, based on THEMIS measurements, to express the boundary flux as three fit functions of solar wind parameters in a response window, depending on energy and on which solar wind parameter is used: speed, density, or both (Shin and Lee, 2013). The Dartmouth radial diffusion model has been run using LANL geosynchronous satellite measurements as the outer boundary for a one-month interval in July to August 2004, and the calculated phase space density (PSD) is compared with GPS measurements at the GPS orbit (L=4.16), at magnetic equatorial plane crossings, as a test of the model. We also used the outer boundary generated from the Shin and Lee model and examined this boundary condition by computing the error relative to the simulation driven by LANL geosynchronous spacecraft data at the outer boundary. The calculation shows overestimation and underestimation at different times; however, the new boundary condition can generally be used to drive the radial diffusion model, producing the phase space density increase and dropout during a storm with a relatively small error. Having this new method based on a solar wind parametrized data set, we can run the radial diffusion model for storms when particle measurements are not available at the outer boundary. We chose the Whole Heliosphere Interval (WHI) as an example and compared the result with MHD/test-particle simulations (Hudson et al., 2012), obtaining much better agreement with PSD based on GPS measurements at L=4.16 using the diffusion model, which incorporates atmospheric losses.
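
    For context, radial diffusion codes of this type solve a one-dimensional diffusion equation for the phase space density f(L, t), driven at the outer L boundary; a standard form (the loss term with timescale τ is a generic assumption here, not necessarily the Dartmouth model's exact formulation) is

      \frac{\partial f}{\partial t} = L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D_{LL}}{L^{2}}\,\frac{\partial f}{\partial L}\right) - \frac{f}{\tau}

    with the measured or fitted flux at geosynchronous orbit supplying f at the outer boundary.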

  4. Effects of Performance Versus Game-Based Mobile Applications on Response to Exercise.

    PubMed

    Gillman, Arielle S; Bryan, Angela D

    2016-02-01

    Given the popularity of mobile applications (apps) designed to increase exercise participation, it is important to understand their effects on psychological predictors of exercise behavior. This study tested a performance feedback-based app compared to a game-based app to examine their effects on aspects of immediate response to an exercise bout. Twenty-eight participants completed a 30-min treadmill run while using one of two randomly assigned mobile running apps: Nike + Running, a performance-monitoring app which theoretically induces an associative, goal-driven state, or Zombies Run!, an app which turns the experience of running into a virtual reality game, theoretically inducing dissociation from primary exercise goals. The two conditions did not differ on primary motivational state outcomes; however, participants reported more associative attentional focus in the performance-monitoring app condition compared to more dissociative focus in the game-based app condition. Game-based and performance-tracking running apps may not have differential effects on goal motivation during exercise. However, game-based apps may help recreational exercisers dissociate from exercise more readily. Increasing the enjoyment of an exercise bout through the development of new and innovative mobile technologies is an important avenue for future research.

  5. Operational on-line coupled chemical weather forecasts for Europe with WRF/Chem

    NASA Astrophysics Data System (ADS)

    Hirtl, Marcus; Mantovani, Simone; Krüger, Bernd C.; Flandorfer, Claudia; Langer, Matthias

    2014-05-01

    Air quality is a key element for the well-being and quality of life of European citizens. Air pollution measurements and modeling tools are essential for the assessment of air quality according to EU legislation. The responsibilities of ZAMG as the national weather service of Austria include the support of the federal states and the public in questions connected to the protection of the environment, in the frame of advisory and counseling services as well as expert opinions. ZAMG conducts daily air-quality forecasts using the on-line coupled model WRF/Chem. Meteorology is simulated simultaneously with the emissions, turbulent mixing, transport, transformation, and fate of trace gases and aerosols. The emphasis of the application is on predicting pollutants over Austria. Two domains are used for the simulations: the mother domain covers Europe with a resolution of 12 km, and the inner domain covers the alpine region with a horizontal resolution of 4 km; 45 model levels are used in the vertical direction. The model runs twice per day for a period of 72 hours and is initialized with ECMWF forecasts. On-line coupled models allow considering two-way interactions between different atmospheric processes including chemistry (both gases and aerosols), clouds, radiation, the boundary layer, emissions, meteorology and climate. In the operational set-up, direct, indirect and semi-direct effects between meteorology and air chemistry are enabled. The model runs on the HPCF (High Performance Computing Facility) of the ZAMG; in the current set-up 1248 CPUs are used. As the simulations need a large amount of computing resources, a method to save I/O time was implemented. Every MPI task writes all its output into the shared-memory filesystem of the compute nodes. Once the WRF/Chem integration is finished, all split NetCDF files are merged and saved on the global file system. The merge routine is based on parallel-NetCDF. With this method the model runs about 30% faster on the SGI ICE X. Different additional external data sources can be used to improve the forecasts. Satellite measurements of the aerosol optical thickness (AOT) and ground-based PM10 measurements are combined into highly resolved initial fields using regression and assimilation techniques. The available local emission inventories provided by the different Austrian regional governments were harmonized and are used for the model simulations. A model evaluation for a selected episode in February 2010 is presented with respect to PM10 forecasts. During that month, exceedances of PM10 thresholds occurred at many measurement stations of the Austrian network. Different model runs (model only / only ground stations assimilated / satellite and ground stations assimilated) are compared to the respective measurements.
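
    The described I/O arrangement might be sketched as follows (this is not ZAMG's actual code): every task writes its piece to node-local shared memory, and one rank merges the pieces after the integration. On a multi-node job the pieces would first have to be gathered, since /dev/shm is visible only within a node:

      # Per-rank shared-memory writes, followed by a single-rank merge.
      import glob
      import shutil
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      piece = f"/dev/shm/wrfout_piece_{rank:04d}.nc"
      with open(piece, "wb") as f:
          f.write(b"...")  # stand-in for this task's NetCDF output

      comm.Barrier()       # make sure every task has flushed its piece
      if rank == 0:
          with open("/global/wrfout_merged.nc", "wb") as out:
              for name in sorted(glob.glob("/dev/shm/wrfout_piece_*.nc")):
                  with open(name, "rb") as src:
                      shutil.copyfileobj(src, out)  # the real merge uses parallel-NetCDF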

  6. NO PLIF imaging in the CUBRC 48-inch shock tunnel

    NASA Astrophysics Data System (ADS)

    Jiang, N.; Bruzzese, J.; Patton, R.; Sutton, J.; Yentsch, R.; Gaitonde, D. V.; Lempert, W. R.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.; Danehy, P. M.

    2012-12-01

    Nitric oxide planar laser-induced fluorescence (NO PLIF) imaging is demonstrated at a 10-kHz repetition rate in the Calspan University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single ~10-millisecond duration run of the ground test facility. Comparison with a CFD simulation shows good overall qualitative agreement in the jet penetration and spreading observed with an average of forty individual PLIF images obtained during several facility runs.

  7. Towards Compensation Correctness in Interactive Systems

    NASA Astrophysics Data System (ADS)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compensable processes compositions, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, that can detect and recover from failures, as correct compositions of compensable processes.
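
    The compensation mechanism can be illustrated with a minimal sketch outside any process calculus: each completed step installs a compensation, and a failure triggers the installed compensations in reverse order before the error propagates:

      # Minimal compensation-based failure recovery for a long-running composition.
      def run_compensable(steps):
          installed = []
          try:
              for action, compensation in steps:
                  action()
                  installed.append(compensation)
          except Exception:
              for comp in reversed(installed):  # undo completed work, newest first
                  comp()
              raise

      run_compensable([
          (lambda: print("reserve flight"), lambda: print("cancel flight")),
          (lambda: print("reserve hotel"), lambda: print("cancel hotel")),
      ])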

  8. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent, as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared-memory systems (an Alliant FX/8, a Sequent and a MultiMax) and a non-shared-memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  9. PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC

    USGS Publications Warehouse

    Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.

    1997-01-01

    PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.

  10. Adaptive real-time methodology for optimizing energy-efficient computing

    DOEpatents

    Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
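
    A hedged sketch of the run-time decision: split the measured run time into a frequency-sensitive part and an insensitive part, then pick the lowest frequency whose predicted slowdown stays within a performance-loss budget. The linear sensitivity model here is our assumption for illustration, not the patented method:

      # Frequency selection under a performance-loss budget.
      def choose_frequency(freqs_ghz, t_total, t_cpu_at_fmax, loss_budget=0.10):
          f_max = max(freqs_ghz)
          t_insensitive = t_total - t_cpu_at_fmax   # memory/IO-bound portion
          for f in sorted(freqs_ghz):               # try slowest (cheapest) first
              t_pred = t_cpu_at_fmax * (f_max / f) + t_insensitive
              if t_pred <= t_total * (1.0 + loss_budget):
                  return f                          # lowest frequency within budget
          return f_max

      # workload: 10 s total at 2.5 GHz, of which 4 s scale with frequency
      print(choose_frequency([1.0, 1.5, 2.0, 2.5], t_total=10.0, t_cpu_at_fmax=4.0))
      # -> 2.0 (predicted 11.0 s, within the 10% budget)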

  11. A falsely fat curvaton with an observable running of the spectral tilt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peloso, Marco; Sorbo, Lorenzo; Tasinato, Gianmassimo, E-mail: peloso@physics.umn.edu, E-mail: sorbo@physics.umass.edu, E-mail: gianmassimo.tasinato@port.ac.uk

    2014-06-01

    In slow roll inflation, the running of the spectral tilt is generically proportional to the square of the deviation from scale invariance, α_s ∝ (n_s − 1)², and is therefore currently undetectable. We present a mechanism able to generate a much larger running within slow roll. The mechanism is based on a curvaton field with a large mass term, and a time evolving normalization. This may happen for instance to the angular direction of a complex field in presence of an evolving radial direction. At the price of a single tuning between the mass term and the rate of change of the normalization, the curvaton can be made effectively light at the CMB scales, giving a spectral tilt in agreement with observations. The lightness is not preserved at later times, resulting in a detectable running of the spectral tilt. This mechanism shows that fields with a large mass term do not necessarily decouple from the inflationary physics, and provides a new tool for model building in inflation.

  12. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    NASA Astrophysics Data System (ADS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-11-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
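
    For reference, the standard large-deviation machinery behind such an analysis relates the scaled cumulant generating function of the displacement x(t) to a rate function via a Legendre transform:

      \lambda(k) = \lim_{t \to \infty} \frac{1}{t} \ln \left\langle e^{k\,x(t)} \right\rangle,
      \qquad
      I(v) = \sup_{k} \left[ k v - \lambda(k) \right],
      \qquad
      P\!\left( x(t)/t \approx v \right) \asymp e^{-t\,I(v)}

    For pure Brownian motion the rate function is parabolic, so deviations of the empirically estimated rate function from a parabola quantify the non-Gaussianity of the motility.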

  13. 40 CFR 600.507-08 - Running change data requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600.507-08 Running...

  14. 40 CFR 600.507-86 - Running change data requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600.507-86 Running...

  15. RHIC Au beam in Run 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S. Y.

    Au beam behavior on the RHIC ramp in run 2014 is reviewed together with runs 2011 and 2012. Observed bunch length and longitudinal emittance are compared with IBS simulations. The IBS growth rate of the longitudinal emittance in run 2014 is similar to run 2011, and both are larger than in run 2012. This is explained by the large transverse emittance at high intensity observed in run 2012, but not in run 2014. The big improvement of the AGS ramping in run 2014 might be related to this change. The importance of the injector intensity improvement in run 2014 is emphasized: it gives rise to the 50% initial luminosity improvement of run 2014 compared with the previous Au-Au run in 2011. In addition, a modified IBS model, calibrated using the RHIC Au runs from 9.8 GeV/n to 100 GeV/n, is presented and used in the study.

  16. Advancing the Implementation of Hydrologic Models as Web-based Applications

    NASA Astrophysics Data System (ADS)

    Dahal, P.; Tarboton, D. G.; Castronova, A. M.

    2017-12-01

    Advanced computer simulations are required to understand hydrologic phenomena such as rainfall-runoff response, groundwater hydrology and snow hydrology. Building a hydrologic model instance to simulate a watershed requires investment in data (diverse geospatial datasets such as terrain and soil) and computer resources, typically demands a wide skill set from the analyst, and the workflow involved is often difficult to reproduce. This work introduces a web-based prototype infrastructure in the form of a web application that provides researchers with easy-to-use access to complete hydrological modeling functionality. This includes creating the necessary geospatial and forcing data, preparing input files for a model by applying complex data preprocessing, running the model for a user-defined watershed, and saving the results to a web repository. The open-source Tethys Platform was used to develop the web app front-end graphical user interface (GUI). We used HydroDS, a web service that provides data-preparation processing capability to support backend computations used by the app. Results are saved in HydroShare, a hydrologic information system that supports the sharing of hydrologic data, models and analysis tools. The TOPographic Kinematic APproximation and Integration (TOPKAPI) model served as the example for which we developed a complete hydrologic modeling service to demonstrate the approach. The final product is a complete modeling system accessible through the web to create input files and run the TOPKAPI hydrologic model for a watershed of interest. We are investigating similar functionality for the preparation of input to the Regional Hydro-Ecological Simulation System (RHESSys). Key words: hydrologic modeling, web services, hydrologic information system, HydroShare, HydroDS, Tethys Platform

  17. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
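
    A toy sketch of qualitative event-based fault isolation: each fault predicts a sequence of measurement deviations (+ above, - below the model prediction), and the observed sequence prunes the candidate set. The signatures below are invented, not the testbed's actual fault models:

      # Deviation-sequence matching against qualitative fault signatures.
      FAULT_SIGNATURES = {
          "battery_degraded": [("voltage", "-"), ("current", "-")],
          "relay_stuck_open": [("current", "-"), ("load_voltage", "-")],
          "sensor_bias":      [("voltage", "+")],
      }

      def isolate(observed):
          candidates = set(FAULT_SIGNATURES)
          for i, deviation in enumerate(observed):
              candidates = {f for f in candidates
                            if i < len(FAULT_SIGNATURES[f])
                            and FAULT_SIGNATURES[f][i] == deviation}
          return candidates

      print(isolate([("voltage", "-"), ("current", "-")]))  # -> {'battery_degraded'}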

  18. 77 FR 16892 - BMW of North America, LLC, Grant of Petition for Decision of Inconsequential Noncompliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... certain BMW vehicles equipped with ``run-flat'' tires do not fully comply with paragraphs S4.3(c) and S4.3... equipped with ``run flat'' tires are affected. The affected vehicle models are certain: Model Year 2008... equipped with ``run-flat'' tires and have no spare tire, the word ``none,'' as required by paragraphs S4.3...

  19. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n⁶). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (quartic time, O(n⁴)). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  20. Users guide: The LaRC human-operator-simulator-based pilot model

    NASA Technical Reports Server (NTRS)

    Bogart, E. H.; Waller, M. C.

    1985-01-01

    A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.

  1. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel(R) Xeon Phi(TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  2. Treadmill based reference running data for healthy subjects is dependent on speed and morphological parameters.

    PubMed

    Schulze, Stephan; Schwesig, René; Edel, Melanie; Fieseler, Georg; Delank, Karl-Stefan; Hermassi, Souhail; Laudner, Kevin G

    2017-10-01

    To obtain spatiotemporal and dynamic running parameters of healthy participants and to identify relationships between running parameters, speed, and physical characteristics. A dynamometric treadmill was used to collect running data from 417 asymptomatic subjects at speeds ranging from 10 to 24 km/h. Spatiotemporal and dynamic running parameters were calculated and measured. Results of the analyses showed that running parameters depend on running speed. Body height correlated with stride length (r=0.5), cadence (r=-0.5) and plantar forefoot force (r=0.6). Body mass also had a strong relationship to plantar forefoot and midfoot forces at 14 and 24 km/h. This reference database can be used in the kinematic and kinetic evaluation of running over a wide range of speeds. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Gaussian approximation potential modeling of lithium intercalation in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Fujikake, So; Deringer, Volker L.; Lee, Tae Hoon; Krynski, Marcin; Elliott, Stephen R.; Csányi, Gábor

    2018-06-01

    We demonstrate how machine-learning based interatomic potentials can be used to model guest atoms in host structures. Specifically, we generate Gaussian approximation potential (GAP) models for the interaction of lithium atoms with graphene, graphite, and disordered carbon nanostructures, based on reference density functional theory data. Rather than treating the full Li-C system, we demonstrate how the energy and force differences arising from Li intercalation can be modeled and then added to a (pre-existing and unmodified) GAP model of pure elemental carbon. Furthermore, we show the benefit of using an explicit pair potential fit to capture "effective" Li-Li interactions and to improve the performance of the GAP model. This provides proof-of-concept for modeling guest atoms in host frameworks with machine-learning based potentials and, in the longer run, is promising for carrying out detailed atomistic studies of battery materials.
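
    A conceptual sketch of the delta-learning idea with generic tools, where scikit-learn Gaussian process regression stands in for GAP: fit only the energy difference due to the guest atom and add it to a fixed baseline. Descriptors, energies, and the baseline value are random placeholders, not real DFT data:

      # Delta learning: model E(host+guest) - E(host), add it to a fixed baseline.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      X = rng.random((40, 8))                        # descriptors of Li environments
      e_host = rng.random(40)                        # carbon-only reference energies
      e_full = e_host + 0.3 * X[:, 0] + 0.05 * rng.standard_normal(40)  # host + Li

      delta_model = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
      delta_model.fit(X, e_full - e_host)            # learn only the Li contribution

      x_new = rng.random((1, 8))
      e_baseline = 0.5                               # stand-in for the host model's prediction
      e_pred = e_baseline + delta_model.predict(x_new)[0]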

  4. Automated system for smoke dispersion prediction due to wild fires in Alaska

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Stuefer, M.; Higbie, L.; Newby, G.

    2007-12-01

    Community climate models have enabled the development of specific environmental forecast systems. The University of Alaska (UAF) smoke group was created to adapt a smoke forecast system to the Alaska region. The US Forest Service (USFS) Missoula Fire Science Lab had developed a smoke forecast system based on the Weather Research and Forecasting (WRF) Model including chemistry (WRF/Chem). Following the successful experience of the USFS, which runs its model operationally for the contiguous U.S., we developed a similar system for Alaska in collaboration with scientists from the USFS Missoula Fire Science Lab. Wildfires are a significant source of air pollution in Alaska because the climate and vegetation favor annual summer fires that burn huge areas. Extreme cases occurred in 2004, when an area larger than Maryland (more than 25,000 km²) burned. Small smoke particles with a diameter less than 10 μm can penetrate deep into the lungs, causing health problems. Smoke also creates severe restrictions to air transport and has a tremendous economic effect. The smoke dispersion forecast system for Alaska was developed at the Geophysical Institute (GI) and the Arctic Region Supercomputing Center (ARSC), both at the University of Alaska Fairbanks (UAF). It will help the public plan activities a few days in advance to avoid dangerous smoke exposure. The availability of modern high-performance supercomputers at ARSC allows us to create and run a high-resolution, WRF-based smoke dispersion forecast for the entire State of Alaska. The core of the system is a Python program that manages the independent pieces. Our adapted Alaska system performs the following steps: calculate the medium-resolution weather forecast using WRF/Met; adapt the near-real-time satellite-derived wildfire location and extent data received via direct broadcast from UAF's Geographic Information Network of Alaska (GINA); calculate fuel moisture using WRF forecasts and National Fire Danger Rating System (NFDRS) fuel maps; calculate smoke emission components using a first-order fire emission model; model the smoke plume rise, yielding one-dimensional (vertical) distributions of smoke constituents in the atmosphere above the fire; run WRF/Chem at high resolution for the forecast; and provide accessible smoke dispersion products using standard graphical tools. The system runs twice each day at ARSC. The results will be freely available from a dedicated wildfire smoke web portal at ARSC.

  5. Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecale Zhou, Carol

    2016-01-03

    This code is a data parser that prepares output from the Qspp agent-based stochastic simulation model for plotting in Excel. The code is specific to a set of simulations that were run for the purpose of preparing data for a publication. Releasing this code as open source is necessary in order to publish the model code (Qspp), which has already been released, and to assure that results from using Qspp for a publication…

  6. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
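
    For background, the operational laws that an operational approach builds on (standard operational analysis, not the paper's new decomposition procedure): with X_i the throughput of device i, S_i its mean service time per visit, V_i the visit count, X the system throughput, N the mean number in system and R the response time,

      U_i = X_i S_i \quad \text{(utilization law)}, \qquad
      X_i = V_i X \quad \text{(forced flow law)}, \qquad
      N = X R \quad \text{(Little's law)}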

  7. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2014-03-01

    [Fragmented report-documentation text recovered from the scan; keywords include .30-cal AP M2 projectile, 7.62x39 PS projectile, SPH, aluminum 5083, SiC, depth-of-penetration (DoP) experiments, and AutoDyn.] The projectiles are modeled using smoothed-particle hydrodynamics (SPH) elements in AutoDyn. Center-strike model validation runs with SiC tiles are conducted based on the DoP experiments. SPH is used for all parts (SPH size = 0.2); the SiC and SiC-2 material models are identical in properties and dimensions.

  8. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha⁻¹ a⁻¹, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha⁻¹ a⁻¹ for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
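
    A compact sketch of the Latin-hypercube screening loop described above; the parameter names, ranges, stand-in model, "observations" and behavioural threshold are all invented for illustration:

      # Latin-hypercube sampling + RMSE screening for behavioural runs.
      import numpy as np
      from scipy.stats import qmc

      names = ["k_nitrify", "k_denitrify", "q10"]
      lower = np.array([0.01, 0.001, 1.5])
      upper = np.array([0.50, 0.100, 3.0])

      def run_model(p):  # hypothetical stand-in for a LandscapeDNDC run (daily N2O)
          t = np.arange(365.0)
          return 10.0 * p["k_nitrify"] * np.exp(-t / (100.0 * p["q10"]))

      def rmse(sim, obs):
          return float(np.sqrt(np.mean((sim - obs) ** 2)))

      obs = run_model({"k_nitrify": 0.2, "q10": 2.2})      # synthetic "observations"
      sample = qmc.LatinHypercube(d=3, seed=42).random(n=1000)
      params = qmc.scale(sample, lower, upper)

      behavioural = [dict(zip(names, p)) for p in params
                     if rmse(run_model(dict(zip(names, p))), obs) < 0.2]
      print(len(behavioural), "behavioural parameter sets")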

  9. Analyzing the causation of a railway accident based on a complex network

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Li, Ke-Ping; Luo, Zi-Yan; Zhou, Jin

    2014-02-01

    In this paper, a new model is constructed for the causation analysis of railway accident based on the complex network theory. In the model, the nodes are defined as various manifest or latent accident causal factors. By employing the complex network theory, especially its statistical indicators, the railway accident as well as its key causations can be analyzed from the overall perspective. As a case, the “7.23” China—Yongwen railway accident is illustrated based on this model. The results show that the inspection of signals and the checking of line conditions before trains run played an important role in this railway accident. In conclusion, the constructed model gives a theoretical clue for railway accident prediction and, hence, greatly reduces the occurrence of railway accidents.
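
    A toy reconstruction of the approach with networkx: causal factors become nodes of a directed graph, and statistical indicators such as betweenness centrality flag factors that sit on many causal paths. The factor list is illustrative, not the paper's full network:

      # Causal-factor network with centrality as a "key causation" indicator.
      import networkx as nx

      edges = [
          ("lightning strike", "signal equipment failure"),
          ("signal equipment failure", "wrong signal displayed"),
          ("inadequate signal inspection", "wrong signal displayed"),
          ("inadequate line-condition checks", "train dispatched onto occupied block"),
          ("wrong signal displayed", "train dispatched onto occupied block"),
          ("train dispatched onto occupied block", "rear-end collision"),
      ]
      G = nx.DiGraph(edges)

      centrality = nx.betweenness_centrality(G)
      key = sorted(centrality.items(), key=lambda kv: -kv[1])
      print(key[:3])  # the highest-betweenness factors lie on most causal paths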

  10. A logistic model of the effects of roadway, environmental, vehicle, crash and driver characteristics on hit-and-run crashes.

    PubMed

    Tay, Richard; Rifaat, Shakil Mohammad; Chin, Hoong Chor

    2008-07-01

    Leaving the scene of a crash without reporting it is an offence in most countries, and many studies have been devoted to improving ways of identifying hit-and-run vehicles and the drivers involved. However, relatively few studies have examined the factors that contribute to the decision to run after a crash. This study identifies the factors associated with the likelihood of hit-and-run crashes, including driver characteristics, vehicle types, crash characteristics, roadway features and environmental characteristics. Using a logistic regression model to distinguish hit-and-run from non-hit-and-run crashes, this study found that drivers were more likely to run when crashes occurred at night, on bridges and flyovers, on bends and straight roads, and near shop houses; involved two vehicles, two-wheel vehicles or vehicles from neighboring countries; and when the driver was male, a minority, and aged between 45 and 69. On the other hand, collisions involving right-turn and U-turn maneuvers, and those occurring on undivided roads, were less likely to be hit-and-run crashes.
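
    A minimal sketch of such a logistic model using statsmodels and fabricated data; the three predictors below stand in for the study's much richer set of driver, vehicle, crash, roadway and environmental characteristics.

    ```python
    # Logistic regression: y = 1 if the driver ran after the crash.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 2, n),   # night-time crash (1 = yes)
        rng.integers(0, 2, n),   # two-wheel vehicle involved
        rng.integers(0, 2, n),   # male driver
    ])
    # Simulate outcomes from assumed effect sizes.
    logit_p = -2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    print(np.exp(model.params))  # odds ratios for running after a crash
    ```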

  11. Running Parallel Discrete Event Simulators on Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, P. D.; Jefferson, D. R.

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator, and the discrete event models that run under its control, to the Sierra architecture so that they make efficient use of its Volta GPUs.

  12. An Object-Oriented Python Implementation of an Intermediate-Level Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.

    2008-12-01

    The Neelin-Zeng Quasi-equilibrium Tropical Circulation Model (QTCM1) is a Fortran-based intermediate-level atmospheric model that includes simplified treatments of several physical processes, including a GCM-like convective scheme and a land-surface scheme with representations of different surface types, evaporation, and soil moisture. This model has been used in studies of the Madden-Julian oscillation, ENSO, and vegetation-atmosphere interaction effects on climate. Through the assumption of convective quasi-equilibrium in the troposphere, the QTCM1 is able to include full nonlinearity, resolve baroclinic disturbances, and generate a reasonable climatology, all at low computational cost. One year of simulation on a PC at 5.625 × 3.75 degree longitude-latitude resolution takes under three minutes of wall-clock time. The Python package qtcm implements the QTCM1 in a mixed-language environment that retains the speed of compiled Fortran while providing the benefits of Python's object-oriented framework and robust suite of utilities and datatypes. We describe key programming constructs used to create this modeling environment: the decomposition of model runs into Python objects, the attachment of visualization methods directly to model-run objects, and the use of Python's mutable datatypes (lists and dictionaries) to implement the "run list" entity, which enables total runtime control of subroutine execution order and content. The result is an interactive modeling environment where the traditional sequence of "hypothesis → modeling → visualization and analysis" is opened up and made nonlinear and flexible. In this environment, science tasks such as parameter-space exploration and testing alternative parameterizations can be easily automated, without the need for multiple versions of the model code interacting with a bevy of makefiles and shell scripts. The environment also simplifies interfacing of the atmospheric model to other models (e.g., hydrologic models, statistical models) and analysis tools. The tools developed for this package can be adapted to create similar environments for hydrologic models.
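
    The "run list" construct can be illustrated with a toy class; the names below (Qtcm, run_list, the step functions) are illustrative stand-ins, not the actual qtcm API.

    ```python
    # The execution order of model subroutines lives in a mutable Python
    # list, so it can be inspected and rearranged at runtime.
    class Qtcm:
        def __init__(self):
            # Mutable list controlling subroutine order and content.
            self.run_list = [self.advect, self.convect, self.radiate]
            self.state = {"T": 300.0}

        def advect(self):  self.state["T"] += 0.10
        def convect(self): self.state["T"] -= 0.05
        def radiate(self): self.state["T"] -= 0.02

        def step(self):
            for subroutine in self.run_list:
                subroutine()

    model = Qtcm()
    model.run_list.remove(model.convect)  # test an alternative configuration
    model.step()
    print(model.state)
    ```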

  13. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    NASA Technical Reports Server (NTRS)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
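
    A minimal sketch of the point about component-level fault models, under invented names: the component specification enumerates fault modes explicitly, so that a particular fault can be instantiated into the model and simulated.

    ```python
    # A component whose specification encodes nominal behaviour plus
    # named fault modes; the simulation can only reproduce faults that
    # were anticipated here.
    class Valve:
        def __init__(self):
            self.fault = None            # None, or a named fault mode

        def flow(self, command_open, inlet_flow):
            if self.fault == "stuck_closed":
                return 0.0
            if self.fault == "leaking":
                return inlet_flow if command_open else 0.2 * inlet_flow
            return inlet_flow if command_open else 0.0

    valve = Valve()
    valve.fault = "stuck_closed"         # instantiate the fault...
    print(valve.flow(command_open=True, inlet_flow=5.0))  # ...and run: 0.0
    ```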

  14. Satellite Data Processing System (SDPS) users manual V1.0

    NASA Technical Reports Server (NTRS)

    Caruso, Michael; Dunn, Chris

    1989-01-01

    SDPS is a menu-driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program: it does not contain complete type checking or error diagnostics, so the program may crash. Known anomalies will be mentioned in the appropriate sections as notes or cautions. SDPS was designed to be used on Sun Microsystems workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed for workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations and the SunView window system.

  15. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
    Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
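
    As a concrete illustration of the frugal/demanding distinction drawn above, here is a small Python sketch (not from the cited works) comparing a local derivative-based sensitivity estimate, which costs one extra run per parameter, with a sampling-based spread estimate that needs thousands of runs; the model function is an invented stand-in.

    ```python
    import numpy as np

    def model(p):
        # Hypothetical environmental model: two parameters, one prediction.
        return p[0] ** 2 + np.sin(p[1])

    p0 = np.array([1.0, 0.5])          # current parameter estimates
    eps = 1e-6

    # Frugal: one extra run per parameter gives local sensitivities.
    sensitivities = np.array([
        (model(p0 + eps * np.eye(2)[i]) - model(p0)) / eps
        for i in range(2)
    ])
    print("local sensitivities:", sensitivities)   # ~[2.0, cos(0.5)]

    # Demanding: Monte Carlo needs orders of magnitude more runs.
    samples = np.random.default_rng(0).normal(p0, 0.1, size=(10_000, 2))
    print("prediction std:", np.std([model(p) for p in samples]))
    ```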

  16. On the control of riverbed incision induced by run-of-river power plant

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Dinh, Quang; Bernardi, Dario; Denaro, Simona; Schippa, Leonardo; Soncini-Sessa, Rodolfo

    2015-07-01

    Water resource management (WRM) through dams or reservoirs is necessary worldwide to support key human activities, ranging from hydropower production to water allocation and flood risk mitigation. The design of reservoir operations aims primarily to fulfill the main purpose (or purposes) for which the structure was built. However, it is well known that reservoirs strongly influence river geomorphic processes, causing sediment deficits downstream and altering water and sediment fluxes, which leads to riverbed incision, infrastructure instability and ecological degradation. We propose a framework that, by combining physically based modeling, surrogate modeling techniques, and multiobjective (MO) optimization, allows fluvial geomorphology to be included in MO optimization, with the main objectives being the maximization of hydropower revenue and the minimization of riverbed degradation. The case study is a run-of-river power plant on the River Po (Italy). A 1-D mobile-bed hydro-morphological model simulated the riverbed evolution over a 10 year horizon for alternative operating rules of the power plant. The knowledge provided by this physically based model is integrated into a MO optimization routine via surrogate modeling using the response surface methodology. Hence, the framework overcomes the high computational costs that have so far hindered the integration of river geomorphology into WRM. We provide numerical proof that river morphologic processes and hydropower production are indeed in conflict, but that the conflict can be mitigated with appropriate control strategies.
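
    A toy sketch of the surrogate step, with invented numbers: a response surface is fitted to a few expensive morphodynamic runs and then queried cheaply to trace the revenue-incision trade-off.

    ```python
    import numpy as np

    # Pretend results of 6 expensive model runs: control setting x ->
    # (hydropower revenue, riverbed incision); values are illustrative.
    x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    revenue  = np.array([0.0, 2.1, 3.6, 4.4, 4.9, 5.0])
    incision = np.array([0.0, 0.3, 0.9, 1.8, 3.1, 4.8])

    rev_surf = np.polynomial.Polynomial.fit(x, revenue, 2)
    inc_surf = np.polynomial.Polynomial.fit(x, incision, 2)

    # Cheap surrogate evaluations over a dense grid; keep the
    # non-dominated (Pareto-efficient) operating rules.
    grid = np.linspace(0, 1, 201)
    points = [(rev_surf(g), inc_surf(g), g) for g in grid]
    pareto = [p for p in points
              if not any(q[0] >= p[0] and q[1] <= p[1] and q != p
                         for q in points)]
    print(f"{len(pareto)} Pareto-efficient operating rules found")
    ```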

  17. Shoe cushioning, body mass and running biomechanics as risk factors for running injury: a study protocol for a randomised controlled trial

    PubMed Central

    Malisoux, Laurent; Delattre, Nicolas; Urhausen, Axel; Theisen, Daniel

    2017-01-01

    Introduction Repetitive loading of the musculoskeletal system is suggested to be involved in the underlying mechanism of the majority of running-related injuries (RRIs). Accordingly, heavier runners are assumed to be at a higher risk of RRI. The cushioning system of modern running shoes is expected to protect runners against high impact forces and, therefore, RRI. However, the role of shoe cushioning in injury prevention remains unclear. The main aim of this study is to investigate the influence of shoe cushioning and body mass on RRI risk, while simultaneously exploring the association between running technique and RRI risk. Methods and analysis This double-blinded randomised controlled trial will involve about 800 healthy leisure-time runners. They will randomly receive one of two running shoe models that differ in their cushioning properties (ie, stiffness) by ~35%. The participants will perform a running test on an instrumented treadmill at their preferred running speed at baseline. They will then be followed up prospectively over a 6-month period, during which they will self-report all their sports activities, as well as any injury, in an internet-based database, TIPPS (Training and Injury Prevention Platform for Sports). Cox regression analyses will be used to compare injury risk between the study groups and to investigate the association among training, biomechanical and anatomical risk factors, and injury risk. Ethics and dissemination The study was approved by the National Ethics Committee for Research (Ref: 201701/02 v1.1). Outcomes will be disseminated through publications in peer-reviewed journals, presentations at international conferences, as well as articles in popular magazines and on specialised websites. Trial registration number NCT03115437, Pre-results. PMID:28827268
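
    A minimal sketch of the planned Cox regression using the lifelines package and fabricated data; the column names, follow-up times and effect sizes are invented, not trial data.

    ```python
    # Cox proportional-hazards model: time-to-first-injury with an event
    # flag and covariates for the randomised shoe version and body mass.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 800
    df = pd.DataFrame({
        "weeks":     rng.exponential(20, n).clip(max=26),  # <= 6 months
        "injured":   rng.integers(0, 2, n),                # 1 = RRI reported
        "soft_shoe": rng.integers(0, 2, n),                # shoe version
        "body_mass": rng.normal(75, 12, n),                # kg
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="weeks", event_col="injured")
    cph.print_summary()  # hazard ratios for cushioning and body mass
    ```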

  18. Investigation on the Practicality of Developing Reduced Thermal Models

    NASA Technical Reports Server (NTRS)

    Lombardi, Giancarlo; Yang, Kan

    2015-01-01

    Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate on-orbit behavior and to ensure that instruments do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Pre-existing instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topographic Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture behavior at both thermal extremes. It was found that, although decreasing the number of nodes from a detailed to a reduced model reduced the run time, the time savings were not large, nor was there a linear relationship between the percentage of nodes removed and the time saved. However, significant losses in accuracy were observed with greater model reduction. While reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced model run time.
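
    A toy sketch, unrelated to the specific GEMS/ICESat-2/LADEE models, of why cutting node count need not cut run time proportionally: in an explicit lumped-parameter solve, the per-node work shrinks but fixed per-step overhead does not. All values are illustrative.

    ```python
    import time
    import numpy as np

    def solve(n_nodes, steps=2000):
        T = np.full(n_nodes, 290.0)            # initial temperatures [K]
        G = np.full(n_nodes - 1, 0.5)          # conductances [W/K]
        C = np.full(n_nodes, 100.0)            # capacitances [J/K]
        dt = 1.0
        for _ in range(steps):
            q = G * (T[1:] - T[:-1])           # conductor heat flows
            T[:-1] += q * dt / C[:-1]
            T[1:]  -= q * dt / C[1:]
            T[0]   += 5.0 * dt / C[0]          # constant heater at node 0
        return T

    # Detailed vs reduced model: a 10x node reduction yields far less
    # than a 10x speed-up because per-step overhead dominates at small n.
    for n in (1000, 100):
        t0 = time.perf_counter()
        solve(n)
        print(f"{n:5d} nodes: {time.perf_counter() - t0:.3f} s")
    ```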

  19. High-volume manufacturing device overlay process control

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Lee, DongYoung; Song, ChangRock; Heo, Hoyoung; Brinster, Irina; Choi, DongSub; Robinson, John C.

    2017-03-01

    Overlay control based on DI (develop inspect) metrology of optical targets has been the primary basis for run-to-run process control for many years. In previous work we described a scenario where optical overlay metrology is performed on metrology targets on a high-frequency basis, including every lot (or most lots) at DI, while SEM-based FI (final inspect) metrology is performed on-device, in-die, as-etched, on an infrequent basis. Hybrid control schemes of this type have been in use for many process nodes. What is new is the relative size of the NZO (non-zero offset) compared to the overlay spec, and the need for more comprehensive solutions to characterize and control the size and variability of NZO at the 1x nm node: sampling, modeling, temporal frequency and control aspects, as well as trade-offs between SEM throughput and accuracy.
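
    A schematic sketch, not the authors' controller: frequent optical DI measurements drive an EWMA run-to-run correction, while infrequent SEM FI measurements update a slowly varying NZO term; all gains and offsets are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    lam_di, lam_nzo = 0.3, 0.2          # EWMA weights (assumed)
    correction, nzo = 0.0, 0.0
    true_process, true_nzo = 1.2, 0.4   # hidden offsets [nm]

    for lot in range(50):
        # Optical DI measurement of the residual overlay error, every lot.
        di = true_process - correction + rng.normal(0, 0.10)
        correction += lam_di * di
        # Infrequent SEM FI measurement reveals the device-to-target bias.
        if lot % 10 == 9:
            fi_minus_di = true_nzo + rng.normal(0, 0.05)
            nzo += lam_nzo * (fi_minus_di - nzo)

    print(f"correction: {correction:.2f} nm, NZO term: {nzo:.2f} nm "
          f"(targets: {true_process:.2f}, {true_nzo:.2f})")
    ```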

  20. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

    Component-oriented programming, when well organized, can bring a large increase in efficiency to the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. The paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application obtained by combining corresponding components; in our model, an aggregation application is a word in a language, as sketched below.
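
    A toy sketch of the idea under invented names: components form an alphabet, the abstract aggregation scheme is a grammar of admissible sequences, and an aggregation application is a word accepted by that grammar.

    ```python
    import re

    # The component alphabet: each symbol names an independent component.
    components = {
        "L": lambda x: [v * 2 for v in x],       # load/transform stage
        "F": lambda x: [v for v in x if v > 2],  # filter stage
        "S": lambda x: sorted(x),                # sort stage
    }

    # Abstract aggregation scheme: one or more L stages, an optional F,
    # then S -- expressed here as a regular language over the alphabet.
    scheme = re.compile(r"L+F?S")

    def run_aggregation(word, data):
        if not scheme.fullmatch(word):
            raise ValueError(f"'{word}' is not a valid aggregation application")
        for symbol in word:                      # compose the components
            data = components[symbol](data)
        return data

    print(run_aggregation("LFS", [3, 1, 2]))     # [4, 6]
    ```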
