Sample records for model time step

  1. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task, and many avenues can be explored, including improved spatial representation, more robust parametrizations, better formulations of some processes, or modification of model structures by trial and error. Several past works indicate that model parameters and structure can depend on the modelling time step; there is thus some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
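
    The cross-time-step comparison can be sketched as follows (an assumed diagnostic: aggregate each run's internal flux to daily totals and measure the relative spread; the paper's actual consistency measures may differ):

        import numpy as np

        def daily_totals(flux, steps_per_day):
            # sum a sub-daily flux series (e.g. interception, mm per step) into daily totals
            m = len(flux) // steps_per_day
            return np.asarray(flux)[: m * steps_per_day].reshape(m, steps_per_day).sum(axis=1)

        def flux_consistency(flux_fine, steps_per_day, flux_daily):
            # mean relative difference between daily totals of a fine-step run and
            # a daily-step run; values near 0 indicate time-step-consistent fluxes
            fine = daily_totals(flux_fine, steps_per_day)
            n = min(len(fine), len(flux_daily))
            return float(np.mean(np.abs(fine[:n] - flux_daily[:n]) / (np.abs(flux_daily[:n]) + 1e-9)))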

  2. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  3. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS's Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  4. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large-time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly saved for long-time simulations.
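
    A minimal sketch of this idea (not necessarily the paper's exact scheme): shrink the step while the free energy is changing quickly and let it grow toward a maximum step near steady state. The functional form and the constant alpha below are illustrative assumptions.

        import numpy as np

        def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
            # Illustrative rule: small steps while |dE/dt| is large (fast dynamics),
            # approaching dt_max as the energy flattens out near steady state.
            return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt**2))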

  5. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
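
    To see why the scheme matters, consider the single linear reservoir dS/dt = P - kS, a basic building block of conceptual models. A sketch (illustrative, not one of the paper's six test models) contrasting fixed-step explicit Euler, which goes unstable once k*dt > 2, with implicit Euler, which is unconditionally stable and has a closed-form update in this linear case:

        def explicit_euler(S, P, k, dt):
            # fixed-step explicit update; for k*dt > 2 the error explodes:
            # a numerical artifact, not model structural error
            return S + dt * (P - k * S)

        def implicit_euler(S, P, k, dt):
            # solves S_new = S + dt*(P - k*S_new) exactly for the linear reservoir
            return (S + dt * P) / (1.0 + dt * k)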

  6. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where the computation time is a critical factor in obtaining an optimized solution in due time.
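
    A minimal sketch of the general pattern (hypothetical objective and constraint names; the paper's water-resources model is far larger): solve a relaxed problem without the complicating constraints, then warm-start the complete problem from that solution.

        from scipy.optimize import minimize

        def two_step_solve(obj, x0, simple_cons, complicating_cons):
            # step 1: relaxed model, complicating constraints dropped
            step1 = minimize(obj, x0, constraints=simple_cons)
            # step 2: complete model, warm-started from the step-1 solution
            return minimize(obj, step1.x, constraints=simple_cons + complicating_cons)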

  7. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the Von Karman vortex street phenomenon. This phenomenon is captured well only with a suitable turbulence model. In this study, several turbulence models available in the software ANSYS Fluent 16.0 were tested for simulating the Von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the Von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, it was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the better the resulting drag coefficient curves. A smaller time step size also gave faster computation times.

  8. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
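
    The convergence order quoted here is the slope of the error versus the step size in log-log space; a small sketch of that estimate (assuming errors are measured against a reference run with the shortest step):

        import numpy as np

        def convergence_order(dts, rms_errors):
            # least-squares slope of log(error) vs log(dt); ~1.0 for a first-order scheme
            slope, _ = np.polyfit(np.log(dts), np.log(rms_errors), 1)
            return slope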

  9. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    PubMed

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each prediction horizon as an independent modelling task; they generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in their component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
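
    For context, the iterative baseline that the paper contrasts against looks roughly like this (a plain least-squares AR(p) applied recursively; the VLM machinery itself is not reproduced here):

        import numpy as np

        def fit_ar(x, p):
            # least-squares AR(p): x_t ~ c + a_1*x_{t-1} + ... + a_p*x_{t-p}
            X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
            A = np.column_stack([X, np.ones(len(X))])
            coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
            return coef[:-1], coef[-1]  # AR weights, intercept

        def iterative_forecast(x, a, c, horizon):
            # feed each prediction back in as input: errors accumulate with horizon,
            # which is the weakness the VLM models are designed to avoid
            hist = list(x)
            for _ in range(horizon):
                hist.append(c + sum(ai * hist[-i - 1] for i, ai in enumerate(a)))
            return hist[len(x):]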

  10. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.

  11. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
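
    A toy illustration of the multirate idea (plain first-order operator splitting, much simpler than the actual EMTSS mode splitting and recombination): slow terms advance with the large CFL-limited step while each fast gravity-wave mode is sub-stepped.

        def multirate_step(u, dt, slow_rhs, fast_rhss, substeps):
            # slow (low-frequency) terms: one forward-Euler step of size dt
            u = u + dt * slow_rhs(u)
            # each fast vertical mode i: substeps[i] smaller steps of size dt/substeps[i]
            for rhs, n in zip(fast_rhss, substeps):
                h = dt / n
                for _ in range(n):
                    u = u + h * rhs(u)
            return u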

  12. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, which sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms, whereas our method adjusts the time step size automatically and remains stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
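
    The step-size rule can be sketched as follows (an assumed reading of "solving a quadratic formula": choose dt so that the second-order Taylor prediction of the voltage change equals a tolerance; the constants are illustrative):

        import math

        def quadratic_dt(dV, d2V, tol=0.5, dt_min=1e-3, dt_max=1.0):
            # solve 0.5*|V''|*dt^2 + |V'|*dt = tol for dt > 0
            a, b = 0.5 * abs(d2V), abs(dV)
            if a < 1e-12:  # nearly linear signal: fall back to |V'|*dt = tol
                dt = tol / b if b > 1e-12 else dt_max
            else:
                dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
            return min(dt_max, max(dt_min, dt))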

  13. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
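
    A bare-bones version of the kind of model used here (a single inverted pendulum integrated up to foot contact; the paper's pendulum-spring model and its leg contact force calculation are more elaborate):

        import math

        def state_at_contact(theta0, omega0, t_contact, L=1.0, g=9.81, dt=1e-4):
            # forward-integrate the falling inverted pendulum until the stepping foot
            # touches down; a larger lean angle/rate at contact demands a longer,
            # quicker step to arrest the fall
            theta, omega = theta0, omega0
            for _ in range(int(t_contact / dt)):
                omega += (g / L) * math.sin(theta) * dt
                theta += omega * dt
            return theta, omega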

  14. Short‐term time step convergence in a climate model

    PubMed Central

    Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane

    2015-01-01

    This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral-element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process-coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid-scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full-physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid-scale physical parameterizations, the stratiform cloud schemes are associated with the largest time-stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time-stepping errors and identify the related model sensitivities. PMID:27660669

  15. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE PAGES

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.

  16. Spatial and Temporal Control Contribute to Step Length Asymmetry during Split-Belt Adaptation and Hemiparetic Gait

    PubMed Central

    Finley, James M.; Long, Andrew; Bastian, Amy J.; Torres-Oviedo, Gelsy

    2014-01-01

    Background: Step length asymmetry (SLA) is a common hallmark of gait post-stroke. Though conventionally viewed as a spatial deficit, SLA can result from differences in where the feet are placed relative to the body (spatial strategy), the timing between foot-strikes (step time strategy), or the velocity of the body relative to the feet (step velocity strategy). Objective: The goal of this study was to characterize the relative contributions of each of these strategies to SLA. Methods: We developed an analytical model that parses SLA into independent step position, step time, and step velocity contributions. This model was validated by reproducing SLA values for twenty-five healthy participants when their natural symmetric gait was perturbed on a split-belt treadmill moving at either a 2:1 or 3:1 belt-speed ratio. We then applied the validated model to quantify step position, step time, and step velocity contributions to SLA in fifteen stroke survivors while walking at their self-selected speed. Results: SLA was predicted precisely by summing the derived contributions, regardless of the belt-speed ratio. Although the contributions to SLA varied considerably across our sample of stroke survivors, the step position contribution tended to oppose the other two, possibly as an attempt to minimize the overall SLA. Conclusions: Our results suggest that changes in where the feet are placed or changes in interlimb timing could be used as compensatory strategies to reduce overall SLA in stroke survivors. These results may allow clinicians and researchers to identify patient-specific gait abnormalities and personalize their therapeutic approaches accordingly. PMID:25589580
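
    The headline quantity is simple to compute; a sketch of the usual normalized definition (the paper's decomposition into step position, step time, and step velocity terms is not reproduced here):

        def step_length_asymmetry(sl_paretic, sl_nonparetic):
            # normalized difference of consecutive step lengths; 0 means symmetric gait
            return (sl_paretic - sl_nonparetic) / (sl_paretic + sl_nonparetic)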

  17. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
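
    The gist of the cdf approach can be illustrated with infiltration-excess runoff (a hypothetical discretization of the intensity distribution; the paper propagates full distributions through all model fluxes):

        import numpy as np

        def daily_infiltration_excess(intensity_classes, time_fractions, capacity):
            # intensity_classes: representative sub-daily intensities (mm/h);
            # time_fractions: fraction of the day spent in each class (sums to 1);
            # runoff is generated only where intensity exceeds capacity (mm/h)
            excess = np.maximum(np.asarray(intensity_classes) - capacity, 0.0)
            return 24.0 * float(np.sum(excess * np.asarray(time_fractions)))  # mm/day

    With the same daily rainfall total, a skewed intensity distribution can produce runoff even though the daily-average intensity sits entirely below the infiltration capacity, which is precisely the nonlinearity a daily-mean model loses.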

  18. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    NASA Astrophysics Data System (ADS)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
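
    The bookkeeping at the heart of an LTS scheme can be sketched as follows (power-of-two level grouping is a common choice and is an assumption here, not necessarily the paper's exact rule):

        import numpy as np

        def lts_levels(dt_allowed, n_levels=4):
            # dt_allowed: per-cell stable step from the CFL condition. Cells at
            # level k advance with 2**k * dt_min, so a few small cells no longer
            # throttle the time step of the whole domain.
            dt_min = float(dt_allowed.min())
            levels = np.floor(np.log2(dt_allowed / dt_min)).astype(int)
            return np.clip(levels, 0, n_levels - 1)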

  19. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has become finer over the past 17 years: from 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1/24 degree if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher-spatial-resolution model for different time steps. In order to do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model, and three spatially distributed models with a resolution of 1x1 km, 5x5 km, and 10x10 km, respectively. All models are run at an hourly time step and aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling technique (Vořechovský, 2014). For each time and space scale, several diagnostics (Nash-Sutcliffe efficiency, Kling-Gupta efficiency, quantiles of the discharge, etc.) are calculated in order to compare model performance over different time and space scales for extreme events like floods and droughts. In addition, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find optimal time and space scale combinations.
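
    A sketch of the evaluation machinery implied here (assumed helper names): aggregate an hourly simulation to coarser target scales and score each against matching observations.

        import numpy as np

        def aggregate(x, n):
            # mean over non-overlapping blocks of n steps (e.g. n=24: hourly -> daily)
            m = len(x) // n
            return np.asarray(x)[: m * n].reshape(m, n).mean(axis=1)

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean
            sim, obs = np.asarray(sim), np.asarray(obs)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # e.g. nse(aggregate(sim_hourly, 24), aggregate(obs_hourly, 24)) for daily skill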

  20. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237
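
    For orientation, a naive BD step with a crude overlap check looks like this (illustrative only; the paper's algorithm handles detected collisions rigorously, respecting detailed balance, rather than simply rejecting moves):

        import numpy as np

        def bd_step(pos, D, dt, radius, rng):
            # free-diffusion proposal: Gaussian displacement, variance 2*D*dt per axis
            prop = pos + rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)
            # detect particle pairs brought within contact distance by the move
            dist = np.linalg.norm(prop[:, None, :] - prop[None, :, :], axis=-1)
            np.fill_diagonal(dist, np.inf)
            clash = (dist < 2.0 * radius).any(axis=1)
            prop[clash] = pos[clash]  # naive rejection; clashes grow with dt
            return prop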

  21. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, making simulation difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
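
    The time step limit driving those costs is the usual CFL-type constraint; a one-line sketch (illustrative velocity scale):

        def max_stable_dt(cell_size, velocity, cfl=0.5):
            # explicit transient solvers must keep dt below ~CFL * dx / u, so halving
            # the cell size roughly doubles the number of time steps as well
            return cfl * cell_size / abs(velocity)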

  22. Unsteady Crystal Growth Due to Step-Bunch Cascading

    NASA Technical Reports Server (NTRS)

    Vekilov, Peter G.; Lin, Hong; Rosenberger, Franz

    1997-01-01

    Based on our experimental findings of growth rate fluctuations during the crystallization of the protein lysozyme, we have developed a numerical model that combines diffusion in the bulk of a solution with diffusive transport to microscopic growth steps that propagate on a finite crystal facet. Nonlinearities in layer growth kinetics arising from step interaction by bulk and surface diffusion, and from step generation by surface nucleation, are taken into account. On evaluating the model with properties characteristic of the solute transport, and of the generation and propagation of steps in the lysozyme system, growth rate fluctuations of the same magnitude and characteristic time as in the experiments are obtained. The fluctuation time scale is large compared to that of step generation. Variations of the governing parameters of the model reveal that both the nonlinearity in step kinetics and mixed transport-kinetics control of the crystallization process are necessary conditions for the fluctuations. On a microscopic scale, the fluctuations are associated with a morphological instability of the vicinal face, in which a step bunch triggers a cascade of new step bunches through the microscopic interfacial supersaturation distribution.

  23. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    NASA Astrophysics Data System (ADS)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. The stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
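
    A sketch of the field computation such a model rests on (a single point charge above a perfectly conducting ground plane, image charge included; the multi-dipole model distributes many such charges along the reconstructed channel, and sign conventions vary):

        import numpy as np

        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def ez_at_ground(q, src, sensor_xy):
            # vertical E-field at a ground-level sensor from charge q (C) located at
            # src = (x, y, z); the conducting ground doubles the vertical component
            dx, dy = sensor_xy[0] - src[0], sensor_xy[1] - src[1]
            r3 = (dx**2 + dy**2 + src[2]**2) ** 1.5
            return 2.0 * q * src[2] / (4.0 * np.pi * EPS0 * r3)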

  24. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are used to best advantage when their outputs are compared not only to the outlet discharge but also to internal observed variables, so that they can serve as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), applied to the Mercier catchment (6.7 km²), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture (9 locations) data are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010 with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparison with other studies found in the literature. In the next steps (steps 3 to 6), the focus was on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied correlations between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model represents the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement, Institut National Polytechnique de Grenoble, 269 pp (in French).
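
    One of the event-scale diagnostics used in steps 5 and 6, the rainfall-runoff lag, can be estimated simply (a cross-correlation sketch; equal-length, evenly sampled series assumed):

        import numpy as np

        def lag_time(rain, flow, dt_hours):
            # lag (hours) at which the hydrograph best correlates with the hyetograph
            r = np.correlate(flow - np.mean(flow), rain - np.mean(rain), mode="full")
            return (np.argmax(r) - (len(rain) - 1)) * dt_hours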

  25. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, upon it, how to solve the heavy time consumption issue in simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
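
    The asynchronous scheme can be pictured as an event queue in which each cell advances on its own locally chosen step (a sketch with an assumed cell interface exposing next_time and advance()):

        import heapq

        def run_async(cells, t_end):
            # priority queue keyed by each cell's own next update time
            queue = [(cell.next_time, i) for i, cell in enumerate(cells)]
            heapq.heapify(queue)
            while queue:
                t, i = heapq.heappop(queue)
                if t > t_end:
                    break
                cells[i].advance(t)  # integrate this cell's ODEs with its local dt
                heapq.heappush(queue, (cells[i].next_time, i))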

  26. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
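
    A compact sketch of the pseudo-time subiteration idea (explicit relaxation of the implicit residual; production codes use multigrid and implicit smoothers instead):

        import numpy as np

        def dual_time_step(u_n, rhs, dt, n_pseudo=30, d_tau=None):
            # march in pseudo-time tau to drive the implicit residual
            # R(u) = (u - u_n)/dt - rhs(u) toward zero; physical accuracy is set
            # by dt alone, so tau carries no time-step limitation of its own
            u = np.copy(u_n)
            tau = d_tau if d_tau is not None else 0.5 * dt
            for _ in range(n_pseudo):
                u = u - tau * ((u - u_n) / dt - rhs(u))
            return u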

  27. Quantitative analysis of the thermal requirements for stepwise physical dormancy-break in seeds of the winter annual Geranium carolinianum (Geraniaceae)

    PubMed Central

    Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.

    2013-01-01

    Background and Aims: Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods: Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results: The base temperature (Tb) for sensitivity induction was 17.2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2.0 and 3.5 and between 0.02 and 0.1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions: Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
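
    The thermal time bookkeeping behind step I is easy to sketch (Tb = 17.2 °C from the abstract; the Gompertz parameters below are placeholders, not the fitted values):

        import numpy as np

        def thermal_time(daily_temps, t_base=17.2):
            # accumulate degree-days above the base temperature
            return float(np.sum(np.maximum(np.asarray(daily_temps) - t_base, 0.0)))

        def fraction_sensitive(theta, a=1.0, b=5.0, c=0.05):
            # three-parameter Gompertz response to accumulated thermal time theta
            return a * np.exp(-b * np.exp(-c * theta))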

  28. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the simplex algorithm, known from linear optimization, is used. It computes a point common to two convex polyhedra; the polyhedra intersect if such a point exists. Under the simplified geometrical model of Ropsus the algorithm also runs in linear time. In conjunction with the first step, a collision detection algorithm results which requires linear time overall. Moreover it computes the resultant intersection polyhedron using the dual transformation.
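
    The second-step test maps naturally onto a linear feasibility problem; a sketch (half-space representations A·x <= b assumed for both polyhedra):

        import numpy as np
        from scipy.optimize import linprog

        def polyhedra_intersect(A1, b1, A2, b2):
            # any x satisfying both half-space systems is a common point, so the
            # polyhedra intersect iff this feasibility LP has a solution
            A = np.vstack([A1, A2])
            b = np.concatenate([b1, b2])
            res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                          bounds=[(None, None)] * A.shape[1], method="highs")
            return res.status == 0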

  29. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm. [considering transonic flow

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time step, and much larger time steps could be used stably. To accelerate iterative convergence, large time steps and a cyclic sequence of time steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
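
    The cyclic-step acceleration can be sketched in a couple of lines (a geometric cycle is assumed; the paper tunes the actual sequence):

        import itertools

        def cyclic_time_steps(dt_base, factors=(1, 2, 4, 8)):
            # repeating a spread of step sizes damps error components across a
            # broad range of frequencies faster than any single fixed step
            for f in itertools.cycle(factors):
                yield dt_base * f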

  30. Partition-based discrete-time quantum walks

    NASA Astrophysics Data System (ADS)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the 2-tessellable staggered model are unitarily equivalent. Thus, selecting one specific model among those families is a matter of taste, not generality.
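
    For concreteness, one step of the standard coined walk whose square drives the two-step model (Hadamard coin on an n-cycle; a minimal sketch, not the paper's partition formalism):

        import numpy as np

        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

        def coined_step(psi):
            # psi has shape (2, n): coin amplitude x position on an n-cycle
            psi = H @ psi                 # coin operator
            out = np.empty_like(psi)
            out[0] = np.roll(psi[0], 1)   # coin state 0 shifts right
            out[1] = np.roll(psi[1], -1)  # coin state 1 shifts left
            return out

        def two_step(psi):
            # the two-step coined model evolves by the square of the one-step operator
            return coined_step(coined_step(psi))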

  31. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    NASA Astrophysics Data System (ADS)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians with different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed using a step measurement method based on the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement, and it offers experimental data for developing microscopic models of pedestrian movement that take stepping behavior into account.
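
    A sketch of the trajectory-curvature computation underlying the step segmentation (finite differences on sampled positions; picking step boundaries at curvature extrema is an assumption about the exact rule):

        import numpy as np

        def trajectory_curvature(x, y, dt):
            # kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) from sampled positions
            dx, dy = np.gradient(x, dt), np.gradient(y, dt)
            ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
            return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5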

  32. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    NASA Astrophysics Data System (ADS)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. We show how this leads to relaxation dynamics with greater fluctuations on a shorter time scale for [110] steps, with 2-bond-breaking processes being rate determining, in contrast to the 3-bond-breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show a diffusion-limited increase for small distances along the step, as well as greater average step displacement for zigzag steps compared to straight steps at somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
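
    In a bond-counting KMC model, the rate-determining character of 2-bond versus 3-bond breaking falls straight out of the Arrhenius rates (illustrative prefactor and bond energy):

        import numpy as np

        def hop_rate(n_bonds, e_bond=0.3, temp=300.0, nu0=1e13, kb=8.617e-5):
            # detachment rate for an atom holding n_bonds lateral bonds (eV, K)
            return nu0 * np.exp(-n_bonds * e_bond / (kb * temp))

        # hop_rate(2) / hop_rate(3) = exp(e_bond / (kb*temp)): 2-bond events dominate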

  33. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process, and future steps for integrating the model in a live operational environment.

  34. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach to determine time-varying vibration modes from input-output measurements. In the first step, single-degree-of-freedom (SDOF) vibration modes are extracted from a multi-degree-of-freedom (MDOF) non-parametric system representation with the use of time-frequency wavelet-based filters. The second step involves a time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated in a system identification analysis of an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure using minimal a priori information on the model.
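
    The parametric second step uses recursive ARMAX models; the sketch below shows the recursive idea on the simpler ARX structure with exponential forgetting (model orders, forgetting factor and initialization are assumptions, and the moving-average noise part of ARMAX is omitted).

        import numpy as np

        def rls_arx(u, y, na=2, nb=2, lam=0.98):
            """Recursive least squares for a time-varying ARX(na, nb) model
            y[k] = -a1*y[k-1] - ... + b1*u[k-1] + ...; exponential forgetting
            (lam < 1) lets the parameters track slow physical variation."""
            n = na + nb
            theta, P, trace = np.zeros(n), 1e4 * np.eye(n), []
            for k in range(max(na, nb), len(y)):
                phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
                gain = P @ phi / (lam + phi @ P @ phi)
                theta = theta + gain * (y[k] - phi @ theta)
                P = (P - np.outer(gain, phi @ P)) / lam
                trace.append(theta.copy())
            return np.array(trace)   # parameter history, one row per sample

    Tracking the roots of the recovered denominator polynomial over time then gives the time-varying natural frequencies and damping ratios of each extracted mode.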

  16. Review of Real-Time Simulator and the Steps Involved for Implementation of a Model from MATLAB/SIMULINK to Real-Time

    NASA Astrophysics Data System (ADS)

    Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi

    2015-06-01

    Nowadays, researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid-twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously cost-prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how does it work". The latest trend in real-time simulation consists of exporting simulation models to FPGAs. In this article, the steps involved in implementing a model from MATLAB/SIMULINK in real-time are provided in detail.

  17. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid spacing and time step for seismic modeling in porous media. FD operators with small grid spacing and time step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid spacing and time step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, and the FD coefficients were then obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method achieves higher accuracy at lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
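
    The Taylor-expansion step mentioned above can be made concrete: matching the odd Taylor terms of a staggered-grid first derivative yields a small linear system for the coefficients. This reproduces the conventional Taylor-matched coefficients; the paper's optimal coefficients additionally tune the dispersion relation, which is not attempted here.

        import numpy as np

        def staggered_fd_coeffs(M):
            """Coefficients c_1..c_M of the order-2M staggered stencil
            f'(0) ~ (1/h) * sum_m c_m * (f((m-1/2)h) - f(-(m-1/2)h)),
            obtained by matching odd-order Taylor terms."""
            offsets = np.arange(1, M + 1) - 0.5
            A = np.array([offsets ** (2 * j - 1) for j in range(1, M + 1)])
            b = np.zeros(M)
            b[0] = 0.5                   # match the f' term; cancel f''', f^(5), ...
            return np.linalg.solve(A, b)

        print(staggered_fd_coeffs(2))    # [1.125, -0.0416667] = [9/8, -1/24]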

  18. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors that limit the modeling and simulation of groundwater level (GL) time series. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both the ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff under various input scenarios to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where no observations are available. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both the calibration and validation steps.
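
    The abstract does not spell out the threshold rule; one common choice consistent with a "threshold-based wavelet method" is Donoho-Johnstone soft thresholding, sketched below with PyWavelets (the wavelet, decomposition level and universal threshold are assumptions, not the authors' exact settings).

        import numpy as np
        import pywt

        def wavelet_denoise(series, wavelet="db4", level=3):
            """Soft-threshold wavelet de-noising of a groundwater-level series;
            the noise scale is estimated robustly from the finest details."""
            coeffs = pywt.wavedec(series, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise scale
            thr = sigma * np.sqrt(2 * np.log(len(series)))       # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(series)]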

  19. Representation of Nucleation Mode Microphysics in a Global Aerosol Model with Sectional Microphysics

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Pierce, J. R.; Adams, P. J.

    2013-01-01

    In models, nucleation mode (1 nm

  20. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
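
    Schematically, writing the discretized equation as dφ/dt = A(T)φ for an operator A with temperature-dependent coefficients, the three schemes differ only in where those coefficients are evaluated (a transcription of the abstract's definitions, not the authors' full Fokker-Planck operators):

        \frac{\phi^{n+1}-\phi^{n}}{\Delta t} = A(T^{n})\,\phi^{n+1}   (SI)

        \frac{\phi^{n+1}-\phi^{n}}{\Delta t} = A(T^{n+1})\,\phi^{n+1}   (FI)

        \frac{\phi^{n+1}-\phi^{n}}{\Delta t} = \left[ A(T^{n}) + \left.\frac{\partial A}{\partial T}\right|_{T^{n}} \left(T^{n+1}-T^{n}\right) \right] \phi^{n+1}   (LI)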

  1. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time-step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain parts of the simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time-step constraint invoked by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
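
    For interior pipe nodes, the standard textbook MOC update intersects the C+ and C- characteristics carried from the two neighbouring nodes; the sketch below (illustrative parameters, explicit friction, boundary nodes and the dynamic components omitted) also makes visible the fixed time-step constraint dt = dx/a that the abstract refers to.

        import numpy as np

        a, g = 1200.0, 9.81              # wave speed [m/s], gravity (assumed values)
        L, N, D, f = 600.0, 60, 0.1, 0.02
        A = np.pi * D**2 / 4
        dx = L / N
        dt = dx / a                      # the fixed MOC time-step constraint
        B = a / (g * A)
        R = f * dx / (2 * g * D * A**2)

        def moc_interior(H, Q):
            """One MOC time step for interior nodes (heads H, flows Q)."""
            Cp = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])   # C+ from node i-1
            Cm = H[2:] - B * Q[2:] + R * Q[2:] * np.abs(Q[2:])       # C- from node i+1
            Hn, Qn = H.copy(), Q.copy()
            Hn[1:-1] = 0.5 * (Cp + Cm)
            Qn[1:-1] = (Cp - Cm) / (2 * B)
            return Hn, Qn                # boundary nodes need their own relations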

  2. Modeling fatigue.

    PubMed

    Sumner, Walton; Xu, Jin Zhong

    2002-01-01

    The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time at which it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi and conjugate gradient based iterative methods using iteration on data, is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
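
    For orientation, the surrounding iteration is ordinary preconditioned CG, sketched below with an explicit matrix and a Jacobi preconditioner; the paper's actual contribution - the three-step matrix-vector product computed by iteration on data, never forming the mixed model equations - is exactly what this simplification hides inside the product A @ p.

        import numpy as np

        def pcg(A, b, tol=1e-10, maxit=1000):
            """Jacobi-preconditioned conjugate gradients for SPD systems."""
            Minv = 1.0 / np.diag(A)
            x = np.zeros_like(b)
            r = b - A @ x
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(maxit):
                Ap = A @ p                    # <- the three-step product goes here
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = Minv * r
                rz, rz_old = r @ z, rz
                p = z + (rz / rz_old) * p
            return x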

  4. Overcoming the detection bandwidth limit in precision spectroscopy: The analytical apparatus function for a stepped frequency scan

    NASA Astrophysics Data System (ADS)

    Rohart, François

    2017-01-01

    In a previous paper [Rohart et al., Phys Rev A 2014;90(042506)], the influence of detection-bandwidth properties on observed line shapes in precision spectroscopy was theoretically modeled for the first time, using the basic model of a continuous sweep of the laser frequency. Specific experiments confirmed the general theoretical trends but also revealed several insufficiencies of the model in the case of stepped frequency scans. Consequently, since up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time-domain apparatus function takes a simple analytical form that can easily be implemented in line-shape fitting codes without any significant increase in computation time. This new model is then considered in detail for detection systems characterized by 1st- and 2nd-order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency-step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely to the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd-order filter is emphasized.
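
    The role of the ratio of detection time constant to frequency-step duration can be mimicked numerically: step a Lorentzian line scan through a 1st-order low-pass filter and sample at the end of each dwell. This brute-force simulation is not the paper's analytical apparatus function, and all numbers are illustrative.

        import numpy as np

        tau = 20e-3                     # detector time constant [s] (assumed)
        t_step = 10e-3                  # dwell time per frequency step [s]
        n_sub, dt = 200, 10e-3 / 200    # Euler sub-steps within one dwell

        nu = np.linspace(-3, 3, 121)    # detuning in HWHM units, scanned upward
        line = 1.0 / (1.0 + nu**2)      # true Lorentzian absorption signal

        y, observed = 0.0, np.empty_like(nu)
        for i, s in enumerate(line):
            for _ in range(n_sub):      # filter relaxes toward s during the dwell
                y += dt / tau * (s - y)
            observed[i] = y             # sampled at the end of the step

        # The filter delays the response, shifting the apparent line center
        # in the scan direction by an amount governed by tau / t_step.
        print(nu[np.argmax(line)], nu[np.argmax(observed)])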

  5. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme combining variable grid size and time step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method offers low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  6. Australia's marine virtual laboratory

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe

    2014-05-01

    In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the model preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based, open-source application which provides a number of model choices and search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.

  7. The constant displacement scheme for tracking particles in heterogeneous aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster, for the same degree of accuracy, than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than with the constant time step scheme.
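
    The scheme is compact enough to sketch directly: each particle's time step is chosen from the local velocity so that its advective displacement is always the same (the velocity field, displacement and dispersion coefficient below are placeholders, not the paper's aquifer model).

        import numpy as np

        def track(x0, velocity, ds, n_steps, D=0.0, seed=0):
            """Constant-displacement particle tracking: dt = ds / |v(x)|,
            so fast particles take short time steps and slow ones long steps."""
            rng = np.random.default_rng(seed)
            x, t = np.asarray(x0, float).copy(), 0.0
            for _ in range(n_steps):
                v = velocity(x)
                dtp = ds / np.linalg.norm(v)       # particle-specific time step
                x += v * dtp                       # advective move of length ds
                if D > 0.0:                        # optional random-walk dispersion
                    x += rng.normal(0.0, np.sqrt(2 * D * dtp), size=x.shape)
                t += dtp
            return x, t

        vel = lambda x: np.array([1.0 + 0.9 * np.sin(x[1]), 0.1])  # toy shear flow
        print(track([0.0, 0.0], vel, ds=0.05, n_steps=200, D=1e-4))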

  8. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step size on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing the vertical resolution and time-step size has significant effects on hurricane intensity and inner-core clouds/precipitation, but little impact on the hurricane track. In general, increasing the vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable for modeling more realistically the intensity, inner-core structures and evolution of tropical storms, as well as other convectively driven weather systems.

  9. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yield a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models, gives a picture of nonequilibrium dynamics that includes both steady states and time-dependent behavior, and shows similarities with models of disordered systems and the pinning/depinning transition.

  10. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions: We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778

  11. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
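
    Steps 4 and 6 of the process - windowing and computing time series features as latent variables - might look as follows on a synthetic vital-sign series (window width, stride and feature set are arbitrary choices for illustration, not the authors').

        import numpy as np

        def window_features(series, width, stride):
            """Sliding-window time series features: level, spread, trend, range.
            Rows are windows; columns are candidate predictive features."""
            feats = []
            for start in range(0, len(series) - width + 1, stride):
                w = series[start:start + width]
                slope = np.polyfit(np.arange(width), w, 1)[0]  # deterioration trend
                feats.append([w.mean(), w.std(), slope, w.max() - w.min()])
            return np.array(feats)

        rng = np.random.default_rng(0)
        hr = 120 + np.cumsum(rng.normal(0, 1, 240))     # synthetic heart-rate series
        print(window_features(hr, width=60, stride=15).shape)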

  12. Operational flood control of a low-lying delta system using large time step Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick

    2015-01-01

    The safety of low-lying deltas is threatened not only by riverine flooding but also by storm-induced coastal flooding. For the purpose of flood control, these deltas are mostly protected by a man-made environment in which dikes, dams and other adjustable structures, such as gates, barriers and pumps, are widely constructed. Instead of continually reinforcing and heightening these structures, it is worth considering making the most of the existing infrastructure to reduce damage and manage the delta in an operational and integrated way. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height, instead of the structure flow, as the control variable. Given that MPC needs to compute control actions in real time, we address issues regarding computational time. A new large time step scheme is proposed to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that Model Predictive Control with the large time step setting is able to control a delta system better and much more efficiently than conventional operational schemes.
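
    One way to give different control variables different control time steps is move blocking: a control value is held constant over several prediction steps, which shrinks the decision vector and the computation time. The unconstrained toy sketch below (a scalar storage reach with invented dynamics and weights, solved by least squares rather than the constrained optimization a real MPC would use) shows the mechanics only.

        import numpy as np

        b, d, x_ref = 0.5, 0.2, 0.0   # gate gain, disturbance inflow, target level
        Np, block, rho = 12, 4, 0.1   # horizon, steps per control move, effort weight

        def mpc_action(x0):
            """Receding-horizon action with control moves blocked over
            'block' prediction steps (the large-time-step idea)."""
            n_moves = Np // block
            T = np.zeros((Np, n_moves))          # maps moves to predicted levels
            for k in range(Np):
                for m in range(n_moves):         # steps <= k during which move m acted
                    T[k, m] = b * max(0, min(k + 1, (m + 1) * block) - m * block)
            free = x0 + d * np.arange(1, Np + 1)             # response with u = 0
            A = np.vstack([T, np.sqrt(rho) * np.eye(n_moves)])
            rhs = np.concatenate([x_ref - free, np.zeros(n_moves)])
            u = np.linalg.lstsq(A, rhs, rcond=None)[0]
            return u[0]                          # apply only the first move

        print(mpc_action(1.0))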

  13. Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

    IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  14. Validation of the 1/12 deg Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

    IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  15. Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chiaming; Lin, Tungyou; Caflisch, Russel

    2008-04-20

    The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other developed by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; Nichols III, A L

    The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to the reduced calibration effort required. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with the ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with the ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% for a Prout-Tompkins cookoff model.

  17. Development of a real time activity monitoring Android application utilizing SmartStep.

    PubMed

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented the development of an insole-based activity and gait monitoring system, SmartStep, a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while wearing the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized to develop the machine-learning model for activity prediction. The resulting classification model was implemented on an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
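
    Multinomial Logistic Discrimination is, in essence, multinomial logistic regression; below is a sketch of training and leave-one-subject-out evaluation with scikit-learn on synthetic stand-in features (the feature construction and group structure are assumptions, not the SmartStep feature set).

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(1)
        # 4 participants x 4 activities (sit, stand, walk, cycle) x 25 windows
        y = np.tile(np.repeat(np.arange(4), 25), 4)
        subject = np.repeat(np.arange(4), 100)
        X = rng.normal(size=(400, 6)) + y[:, None] * 0.8   # class-separated features

        clf = LogisticRegression(max_iter=1000)            # multinomial logistic model
        scores = cross_val_score(clf, X, y, groups=subject, cv=LeaveOneGroupOut())
        print(scores.mean())   # held-out-subject accuracy, cf. the paper's LOOCV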

  18. 12-step participation and outcomes over 7 years among adolescent substance use patients with and without psychiatric comorbidity.

    PubMed

    Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I; Weisner, Constance

    2013-01-01

    This study examines the associations between 12-step participation and outcomes over 7 years among 419 adolescent substance use patients with and without psychiatric comorbidities. Although level of participation decreased over time for both groups, comorbid adolescents participated in 12-step groups at comparable or higher levels across time points. Results from mixed-effects logistic regression models indicated that for both groups, 12-step participation was associated with both alcohol and drug abstinence at follow-ups, increasing the likelihood of either by at least 3 times. Findings highlight the potential benefits of 12-step participation in maintaining long-term recovery for adolescents with and without psychiatric disorders.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, William L; Gunderson, Jake A; Dickson, Peter M

    There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high-performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict the time and location of ignition for simple geometries. However, there has been relatively little attention given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order, single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 1960s and 1970s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 1970s, Catalano et al. noted that single-step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 1980s, Tarver and McGuire published their well-known three-step kinetic expression that included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson et al. deduced the kinetics and thermodynamics of the phase transition, providing Dickson et al. with the information necessary to develop a four-step model that included a two-step nucleation and growth mechanism for the beta-delta phase transition. Initially, an irreversible scheme was proposed. That model accurately predicted the spatial and temporal cook off behavior of the small-scale radial experiment under slow heating conditions, but did not accurately capture the endothermic phase transition at a faster heating rate. The current version of the four-step model includes reversibility and accurately describes the small-scale radial experiment over a wide range of heating rates. We have observed impact-induced friction ignition of PBX 9501 with grit embedded between the explosive and the lower anvil surface. Observation was done using an infrared camera looking through the sapphire bottom anvil. The time to ignition and the temperature-time behavior were recorded. The time to ignition was approximately 500 microseconds and the temperature was approximately 1000 K. The four-step reversible kinetic scheme was previously validated for slow cook off scenarios. Our intention was to test its validity for significantly faster hot-spot processes, such as the impact-induced grit friction process studied here. We found that the model predicted the ignition time within experimental error. There are caveats to consider when evaluating the agreement. The primary input to the model was friction work over an area computed by a stress analysis. The work rate itself, and the relative velocity of the grit and substrate, both have a strong dependence on the initial position of the grit. Any errors in the analysis or the initial grit position would affect the model results. At this time, we do not know the sensitivity to these issues. However, the good agreement does suggest that the four-step kinetic scheme may have universal applicability for HMX systems.

  20. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    NASA Astrophysics Data System (ADS)

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend that prior work, focusing on the vibro-impact phenomena observed in the step responses of a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both the time and time-frequency domains. Minimal-order non-linear models of the experiment are successfully constructed using piecewise-linear stiffness and Coulomb friction elements; eight cases of the generic system are examined, though only three are experimentally studied. Measured and predicted responses for the single- and dual-clearance configurations exhibit double-sided impacts, and their time-varying periods suggest softening trends under the step-down torque. The non-linear models are experimentally validated by comparing results with the new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.

  1. Waveform distortion by 2-step modeling ground vibration from trains

    NASA Astrophysics Data System (ADS)

    Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.

    2017-10-01

    The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. In order to reveal this modeling error, time histories of ground-borne vibrations were computed with the 2-step procedure and then compared with the results of a benchmark procedure applied to the whole system. All parameters involved were intentionally set equal for the two methods, which ensures that differences in the results originate from the inconsistency of the ground model. Excited by wheel loads at low speeds such as 60 km/h and low frequencies below 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when the load frequency equaled 16 Hz, not only at the subgrade but also at points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to loads at very low frequencies. The modeling error found here suggests that the ground models in the 2 steps should be calibrated with respect to the frequency bands to be investigated, and that the speed of the train should be taken into account at the same time.

  2. Design-for-manufacture of gradient-index optical systems using time-varying boundary condition diffusion

    NASA Astrophysics Data System (ADS)

    Harkrider, Curtis Jason

    2000-08-01

    The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index-profile control method, multiple-step diffusion. The ions displaced from the glass during the ion-exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt-poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that is used to calculate ion-exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility. An axicon, an imaging lens, and a curved radial lens, all with different index-profile requirements, are designed out of a single glass composition.

  3. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. Operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to carry out their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time-remaining display feature, which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Furthermore, such a feature increases the awareness of operators about their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of the action time at each step is acquired based on similar accident cases and Support Vector Regression. The derived time is processed and then displayed on a CBP user interface.

  4. [Collaborative application of BEPS at different time steps.

    PubMed

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly is designed to simulate the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale because of its more complex model structure and time-consuming solving process. However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes; it is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly was used to optimize the main photosynthetic parameters - the maximum carboxylation rate (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax) - at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data can improve the simulation ability of the model. In 2011, the primary productivity of the forest types, in descending order, was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  5. Nuclear fusion during yeast mating occurs by a three-step pathway.

    PubMed

    Melloy, Patricia; Shen, Shu; White, Erin; McIntosh, J Richard; Rose, Mark D

    2007-11-19

    In Saccharomyces cerevisiae, mating culminates in nuclear fusion to produce a diploid zygote. Two models for nuclear fusion have been proposed: a one-step model in which the outer and inner nuclear membranes and the spindle pole bodies (SPBs) fuse simultaneously and a three-step model in which the three events occur separately. To differentiate between these models, we used electron tomography and time-lapse light microscopy of early stage wild-type zygotes. We observe two distinct SPBs in approximately 80% of zygotes that contain fused nuclei, whereas we only see fused or partially fused SPBs in zygotes in which the site of nuclear envelope (NE) fusion is already dilated. This demonstrates that SPB fusion occurs after NE fusion. Time-lapse microscopy of zygotes containing fluorescent protein tags that localize to either the NE lumen or the nucleoplasm demonstrates that outer membrane fusion precedes inner membrane fusion. We conclude that nuclear fusion occurs by a three-step pathway.

  6. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variability on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new approach that couples the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model while preserving the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscaling rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time-step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998). Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous-time-step rainfall generator has been developed and tested using 10 min and hourly rainfall data from four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
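
    A compressed sketch of the coupling: step 1 draws rectangular-pulse events from exponential distributions, step 2 conserves each event's volume exactly through a microcanonical cascade. The beta-distributed weights below stand in for the sigmoid-parameterized weight probabilities described above, and all parameter values are illustrative rather than fitted.

        import numpy as np

        rng = np.random.default_rng(42)

        def poisson_pulse_events(n, mean_dry=40.0, mean_dur=6.0, mean_int=1.5):
            """Step 1: Poisson rectangular pulses - exponential dry-spell
            duration [h], event duration [h] and mean intensity [mm/h]."""
            return [(rng.exponential(mean_dry), rng.exponential(mean_dur),
                     rng.exponential(mean_int)) for _ in range(n)]

        def cascade(total, levels=4):
            """Step 2: microcanonical cascade - each box splits its volume
            into two parts with weights w and 1-w, so mass is conserved."""
            boxes = np.array([total])
            for _ in range(levels):
                w = rng.beta(0.6, 0.6, size=boxes.size)   # assumed weight law
                boxes = np.column_stack([w * boxes, (1 - w) * boxes]).ravel()
            return boxes

        for dry, dur, inten in poisson_pulse_events(3):
            fine = cascade(dur * inten)                   # event volume [mm]
            assert np.isclose(fine.sum(), dur * inten)    # volume conserved exactly
            print(f"dry {dry:5.1f} h, wet {dur:4.1f} h -> {fine.size} sub-intervals")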

  7. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
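
    Steps 2 through 4 can be made concrete with a simplified individuals-type control chart (a real PI analysis would use the appropriate attribute chart with subgroup sizes; the monthly HAI rates below are invented for illustration).

        import numpy as np

        def ready_to_benchmark(rates):
            """Steps 2-3: flag special-cause variation via 3-sigma limits;
            only a stable (in-control) process proceeds to benchmarking."""
            center, sigma = rates.mean(), rates.std(ddof=1)
            ucl, lcl = center + 3 * sigma, max(center - 3 * sigma, 0.0)
            return bool(np.all((rates >= lcl) & (rates <= ucl))), (lcl, center, ucl)

        hai = np.array([3.1, 2.8, 3.4, 2.6, 3.0, 2.9, 3.2, 2.7, 3.1, 2.8, 3.0, 2.9])
        stable, (lcl, center, ucl) = ready_to_benchmark(hai)
        if stable:            # Step 4: compare with the internal threshold, then
            print(center)     # Steps 5.1/5.2: internal and external benchmarks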

  8. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    NASA Astrophysics Data System (ADS)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with the single-screw system, open-water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open-water performance prediction of contra-rotating propellers by RANS with a sliding-mesh method, considering the effects of the computational time-step size and the turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, RANS with the sliding-mesh method and the SST k-ω turbulence model shows good precision in the open-water performance prediction of contra-rotating propellers; a small time-step size can improve the accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time-step size is a better choice for CRPs with different blade numbers.

  9. The joy of interactive modeling

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert

    2013-04-01

    The conventional way of working with hydrodynamic models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files); 2) run the model from start to end; 3) visualize the results; 4) repeat any of the previous steps. This cycle commonly takes from hours to several days. What if we could make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run the model and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions, or state changes, at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates how the model engine is instructed from 'outside', instead of the model engine taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables, and control functions, e.g. for time stepping, via a standardized interface such as BMI (Peckham et al., 2012). In this work we have used the shallow water flow model engine D-Flow Flexible Mesh. The model was converted from an executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using the VTK library.
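
    The control pattern described above maps naturally onto BMI-style calls (initialize, update, update_until, set_value). The toy engine below is a hypothetical stand-in for an engine such as D-Flow Flexible Mesh; it sketches how a caller that owns the time loop can change a boundary condition mid-run without restarting the model:

    ```python
    # Minimal BMI-flavoured engine: the caller owns the time loop ("Hollywood principle").
    class ToyFlowEngine:
        def initialize(self, config=None):
            self.t, self.dt = 0.0, 60.0            # seconds
            self.state = {"water_level": 0.0, "inflow": 10.0}

        def set_value(self, name, value):          # feed new boundary conditions at any time
            self.state[name] = value

        def get_value(self, name):
            return self.state[name]

        def update(self):                          # advance exactly one time step
            self.state["water_level"] += 1e-4 * self.state["inflow"] * self.dt
            self.t += self.dt

        def update_until(self, t_end):             # run up to a specified time
            while self.t < t_end:
                self.update()

    engine = ToyFlowEngine()
    engine.initialize()
    engine.update_until(1800.0)                    # run half an hour
    engine.set_value("inflow", 50.0)               # user edits a boundary condition live
    engine.update_until(3600.0)                    # continue without restarting
    print(engine.t, engine.get_value("water_level"))
    ```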

  10. Finite cohesion due to chain entanglement in polymer melts.

    PubMed

    Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing

    2016-04-14

    Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.

  11. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique in which the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed finite-element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is the most computationally expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
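
    The splitting idea, several cheap explicit advection sub-steps followed by one implicit dispersion step spanning the same interval, can be sketched in one dimension. This is a generic illustration of the time-splitting, not TaRSE's Godunov/mixed-finite-element discretization: it uses first-order upwind advection and a backward-Euler centered diffusion solve on a periodic grid.

    ```python
    import numpy as np

    nx, dx, u, D = 100, 1.0, 0.5, 0.1
    dt_adv = 0.8 * dx / u              # explicit advective step (CFL-limited)
    n_sub = 4                          # advective sub-steps per dispersive step
    dt_disp = n_sub * dt_adv           # implicit dispersive step spans the sub-steps

    c = np.zeros(nx); c[40:45] = 1.0   # sharp solute front

    # Backward-Euler diffusion matrix (periodic); implicit, so stable at the large step
    r = D * dt_disp / dx**2
    A = (1 + 2 * r) * np.eye(nx)
    for i in range(nx):
        A[i, (i - 1) % nx] -= r
        A[i, (i + 1) % nx] -= r

    for _ in range(50):
        for _ in range(n_sub):                    # several explicit upwind advection steps
            c = c - u * dt_adv / dx * (c - np.roll(c, 1))
        c = np.linalg.solve(A, c)                 # then one implicit dispersion step

    print(c.sum())                                # mass is conserved by both sub-steps
    ```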

  12. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

    TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control the maximum change in potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.

  13. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  14. Sensitivity of the High-Resolution WAM Model with Respect to Time Step

    NASA Astrophysics Data System (ADS)

    Kasemets, K.; Soomere, T.

    The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with a characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but features a cliff up to 50 m high that is frequently covered by tall forests. The area also contains numerous banks with water depths of a couple of metres that may essentially modify wave properties nearby owing to topographical effects. This suggests that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the RMS difference of significant wave heights calculated with different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean significant wave height over the whole Baltic Sea was 2.4 m (1-minute time step) and 2.04 m (3-minute time step), respectively. The most probable reason for this difference is that the WAM model with a relatively large time step poorly describes wave field evolution in the Åland area, with its extremely ragged bottom topography and coastline. Earlier studies have reported that the WAM model frequently underestimates wave heights in the northern Baltic Proper by 20-30% in the case of strong north storms (Tuomi et al., Report Series of the Finnish Institute of Marine Research, 1999). The described results suggest that part of this underestimation may be removed through a proper choice of the time step.
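
    The CFL constraint quoted above can be made concrete for deep-water waves, whose group speed is c_g = gT/(4π) for wave period T; the sketch below estimates the largest stable explicit time step on a sub-kilometre grid. The grid spacing and wave period are assumed illustrative values, not taken from the study.

    ```python
    import math

    def max_cfl_time_step(dx_m: float, wave_period_s: float, courant: float = 1.0) -> float:
        """Largest stable time step for deep-water group speed c_g = g*T/(4*pi)."""
        g = 9.81
        c_g = g * wave_period_s / (4.0 * math.pi)   # deep-water group velocity (m/s)
        return courant * dx_m / c_g

    # ~500 m grid and 14 s waves (assumed): the step comes out at a few tens of seconds
    print(max_cfl_time_step(dx_m=500.0, wave_period_s=14.0))   # ~46 s
    ```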

  15. Steps wandering on the lysozyme and KDP crystals during growth in solution

    NASA Astrophysics Data System (ADS)

    Rashkovich, L. N.; Chernevich, T. G.; Gvozdev, N. V.; Shustin, O. A.; Yaminsky, I. V.

    2001-10-01

    We have applied atomic force microscopy to study, in solution, the time evolution of step roughness on crystal faces with a high (potassium dihydrogen phosphate, KDP) and a low (lysozyme) density of kinks. It was found that the roughness increases with time, following a t^(1/4) time dependence. Step velocity does not depend on the distance between steps; the experimental data were therefore interpreted on the basis of Voronkov's theory, which assumes that the attachment and detachment of building units at the kinks is the major limitation on crystal growth. Within this theoretical model, material parameters are calculated.

  16. Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Martinelli, L.

    1994-01-01

    The steady-state solution of the system of equations consisting of the full Navier-Stokes equations and two turbulence equations has been obtained using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values. Rapid and uniform convergence rates for the flow and turbulence equations are observed.
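
    The loosely coupled update pattern, an explicit multistage Runge-Kutta step for the flow with a local (per-cell) time step and a point-implicit step that keeps the turbulence quantity positive, can be illustrated on a toy per-cell model. The coefficients and model equations below are placeholders, not the paper's discretization.

    ```python
    import numpy as np

    # Toy per-cell state: u (flow-like variable), k (turbulence-like, must stay positive)
    n = 8
    u = np.linspace(1.0, 2.0, n)
    k = np.full(n, 0.5)
    dt_local = 0.9 / (1.0 + u)                 # stability-bound local time step per cell

    def rhs_flow(u):
        return -0.3 * u                        # placeholder flow residual

    # One multistage (3-stage) Runge-Kutta update using the local time steps
    alphas = (0.6, 0.6, 1.0)
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt_local * rhs_flow(u)

    # Point-implicit turbulence update: treating the destruction term implicitly
    # guarantees positivity of k for any time step: k' = (k + dt*P) / (1 + dt*D)
    P, D = 0.2, 1.5                            # placeholder production/destruction rates
    k = (k + dt_local * P) / (1.0 + dt_local * D)

    print(u.round(3), k.round(3), bool((k > 0).all()))
    ```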

  17. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived by iteratively solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  18. Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C; Lin, T; Caflisch, R

    2007-05-22

    The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between the two methods are presented.
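
    A binary-collision step in the spirit of Takizuka and Abe pairs particles at random and rotates each pair's relative velocity through a small random angle whose variance grows linearly with the time step, which is precisely the source of the time-step error compared above. The sketch below is for equal-mass particles and collapses the physical Coulomb-logarithm prefactor into a single tunable parameter nu:

    ```python
    import numpy as np

    def binary_collision_step(v, nu, dt, rng):
        """One Monte Carlo collision step: random pairing plus small-angle scattering.

        v  : (N, 3) array of velocities, equal masses assumed
        nu : lumped collision-frequency parameter (stands in for the Coulomb-logarithm
             prefactor of the Takizuka & Abe model)
        """
        idx = rng.permutation(len(v))
        for a, b in zip(idx[0::2], idx[1::2]):
            u = v[a] - v[b]                       # relative velocity
            umag = np.linalg.norm(u)
            if umag == 0.0:
                continue
            # scattering angle: Var(tan(theta/2)) proportional to nu*dt/u^3
            theta = 2.0 * np.arctan(rng.normal(0.0, np.sqrt(nu * dt / umag**3)))
            n = np.cross(u, rng.normal(size=3))   # random rotation axis perpendicular to u
            n /= np.linalg.norm(n)
            du = u * (np.cos(theta) - 1.0) + np.cross(n, u) * np.sin(theta)
            v[a] += 0.5 * du                      # momentum- and energy-conserving split
            v[b] -= 0.5 * du

    rng = np.random.default_rng(0)
    v = rng.normal(size=(1000, 3))
    E0, P0 = 0.5 * (v**2).sum(), v.sum(axis=0)
    binary_collision_step(v, nu=1.0, dt=0.01, rng=rng)
    print(np.isclose(E0, 0.5 * (v**2).sum()), np.allclose(P0, v.sum(axis=0)))
    ```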

  19. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.

  20. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    PubMed

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from the starting signal to the first step, and the minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In the stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, the FTSTS test and the OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P < 0.01), although the correlation was not significantly higher than that of the TUG test time alone. We showed which TUG test parameters were associated with each motor function test. Moreover, the TUG test time, regarded as a measure of lower extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  1. A model-free method for mass spectrometer response correction [for oxygen consumption and cardiac output calculation]

    NASA Technical Reports Server (NTRS)

    Shykoff, Barbara E.; Swanson, Harvey T.

    1987-01-01

    A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
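
    The core of such a correction can be sketched as follows: differentiate the measured step response to estimate the impulse response, then solve a regularized least-squares problem for an FIR filter that maps that response back to a delayed unit impulse. The filter length and ridge parameter below are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def deconvolution_filter(step_response, L=21, ridge=1e-3):
        """Least-squares FIR filter h such that h * g approximates a delayed unit impulse,
        where g is the impulse response (derivative of the measured step response)."""
        g = np.diff(step_response)                  # impulse response estimate
        N = len(g)
        G = np.zeros((N + L - 1, L))                # convolution matrix: G @ h = g * h
        for j in range(L):
            G[j:j + N, j] = g
        target = np.zeros(N + L - 1)
        target[L // 2] = 1.0                        # delayed delta (two-sided filter)
        return np.linalg.solve(G.T @ G + ridge * np.eye(L), G.T @ target)

    # Sluggish first-order step response (illustrative), corrected back toward a step
    t = np.arange(200)
    step_resp = 1.0 - np.exp(-t / 8.0)
    h = deconvolution_filter(step_resp)
    corrected = np.convolve(step_resp, h, mode="same")
    print(np.abs(corrected[100:180] - 1.0).max())   # mid-region sits near 1 after correction
    ```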

  2. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2016-01-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.

  3. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2015-07-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.

  4. Time-dependent rheological behavior of natural polysaccharide xanthan gum solutions in interrupted shear and step-incremental/reductional shear flow fields

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Seok; Song, Ki-Won

    2015-11-01

    The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), the step-shear flow behavior of a concentrated xanthan gum model solution has been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply to a maximum at the initial stage of shearing and then decays towards a steady state as the shearing time increases, in both start-up shear flow fields. The shear stress drops suddenly when the imposed shear rate is stopped, and then decays slowly during the rest time. (ii) As the rest time increases, the difference in maximum stress between the two start-up shear flow fields decreases, whereas the shearing time has only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after the maximum stress is passed, structural destruction causes the stress to decay towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress shortens as the step-increased shear rate grows. (iv) In step-reductional shear flow fields, after the minimum stress is passed, structural recovery induces stress growth towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress lengthens as the step-decreased shear rate falls.

  5. LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite

    NASA Astrophysics Data System (ADS)

    Crowley, Kevin D.

    1993-05-01

    The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS™ 3.0 and above (and Windows™ 3.0 or above) with VGA or SVGA graphics and a Microsoft™-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
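
    The isothermal-step approximation at the program's core can be sketched with standard equivalent-time bookkeeping: for each step, fold the reduction already accumulated into an effective duration at that step's temperature, then anneal for the step length on top of it. The annealing law and all constants below are invented placeholders chosen to be invertible, not one of the calibrated apatite annealing models the program offers.

    ```python
    import numpy as np

    B0, EA_OVER_R, T0 = 7.5e6, 8500.0, 1.0       # made-up annealing constants

    def anneal_step(r, dt_s, T_K):
        """Advance reduced track length r through one isothermal step (dt_s at T_K).

        Placeholder law r = 1 - b(T) * ln(1 + t/T0); the equivalent-time trick folds the
        reduction accumulated so far into an effective time at this step's temperature.
        """
        b = B0 * np.exp(-EA_OVER_R / T_K)
        c = (1.0 - r) / b                        # equals ln(1 + t_eq/T0) for the current r
        return max(0.0, 1.0 - b * np.logaddexp(c, np.log(dt_s / T0)))

    MYR = 3.156e13                               # seconds per million years
    r = 1.0
    # Cooling history approximated by isothermal steps: 120 C down to 60 C over 40 Myr
    for dt, T in [(10 * MYR, 393.0), (10 * MYR, 373.0), (10 * MYR, 353.0), (10 * MYR, 333.0)]:
        r = anneal_step(r, dt, T)
    print(f"final reduced mean track length r = {r:.3f}")   # dominated by the hottest step
    ```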

  6. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    NASA Astrophysics Data System (ADS)

    Octova, A.; Sule, R.

    2018-04-01

    Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. Sources are placed in one borehole and receivers are placed in the others. First-arrival travel times recorded by each receiver are used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters, with two receiver configurations of 24 and 45 receivers. The second step applies the inversion process to the field data, which consist of five borehole pairs. The last step tests the quality of the tomogram with a resolution test. Data processing using the FAST software produces a well-defined shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomography processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In resolution tests using a checkerboard, anomalies as small as 2 m × 2 m can still be identified. Travel-time cross-hole seismic tomography analysis shows that this method is very good at describing subsurface structure and layer boundaries. Anomaly size and position can be recognized and interpreted easily.

  7. A New Insight into the Mechanism of NADH Model Oxidation by Metal Ions in Non-Alkaline Media.

    PubMed

    Yang, Jin-Dong; Chen, Bao-Long; Zhu, Xiao-Qing

    2018-06-11

    It has long been controversial whether the three-step (e-H+-e) or the two-step (e-H•) mechanism operates in the oxidations of NADH and its models by metal ions in non-alkaline media; the latter mechanism has been accepted by the majority of researchers. In this work, 1-benzyl-1,4-dihydronicotinamide (BNAH) and 1-phenyl-1,4-dihydronicotinamide (PNAH) are used as NADH models, and the ferrocenium ion (Fc+) as the electron acceptor. The kinetics of the oxidations of the NADH models by Fc+ in pure acetonitrile were monitored by UV-Vis absorption, and a quadratic relationship between kobs and the concentrations of the NADH models was found for the first time. The rate expression of the reactions, developed according to the three-step mechanism, is quite consistent with the quadratic curves. The rate constants, thermodynamic driving forces and KIEs of each elementary step of the reactions were estimated. All the results support the three-step mechanism. The intrinsic kinetic barriers of the proton transfer from BNAH+• to BNAH and of the hydrogen atom transfer from BNAH+• to BNAH+• were estimated; the former is 11.8 kcal/mol, and the latter is larger than 24.3 kcal/mol. It is the large intrinsic kinetic barrier of the hydrogen atom transfer that makes the reactions follow the three-step rather than the two-step mechanism. Further investigation of the factors affecting the intrinsic kinetic barrier of chemical reactions indicated that the large intrinsic kinetic barrier of the hydrogen atom transfer originates from the repulsion of positive charges between BNAH+• and BNAH+•. The greatest contribution of this work is the discovery of the quadratic dependence of kobs on the concentrations of the NADH models, which is inconsistent with the conventional "two-step mechanism" viewpoint on the oxidations of NADH and its models by metal ions in non-alkaline media.

  8. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage in discovering relevant features of nonlinear relations among variables. This study aims to compare the predictive ability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees over multiple predictors, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
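
    A minimal version of this comparison can be set up as below: a seasonal ARIMA fitted with statsmodels against a Random Forest trained on lagged inflows. The synthetic monthly series, the (1,0,1)x(1,0,1,12) order and the 12-lag feature window are illustrative assumptions, not the study's settings.

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    t = np.arange(360)                                      # 30 years of monthly "inflow"
    y = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)
    train, test = y[:300], y[300:]

    # Stochastic approach: seasonal ARIMA, multi-step-ahead forecast
    sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    sarima_pred = sarima.forecast(steps=len(test))

    # Machine learning approach: Random Forest on 12 lagged values, one step ahead
    lags = 12
    X = np.array([y[i - lags:i] for i in range(lags, 300)])
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train[lags:])
    rf_pred = np.array([rf.predict(y[i - lags:i].reshape(1, -1))[0] for i in range(300, 360)])

    for name, p in [("SARIMA", sarima_pred), ("RF", rf_pred)]:
        print(name, "RMSE:", np.sqrt(np.mean((p - test) ** 2)))
    ```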

  9. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE PAGES

    Xiao, Heng; Endo, Satoshi; Wong, May; ...

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61 qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  10. Examples of Linking Codes Within GeoFramework

    NASA Astrophysics Data System (ADS)

    Tan, E.; Choi, E.; Thoutireddy, P.; Aivazis, M.; Lavier, L.; Quenette, S.; Gurnis, M.

    2003-12-01

    Geological processes usually encompass a broad spectrum of length and time scales. Traditionally, a modeling code (solver) is written to solve a problem with specific length and time scales in mind. The utility of the solver beyond the designated purpose is usually limited. Furthermore, two distinct solvers, even if each can solve complementary parts of a new problem, are difficult to link together to solve the problem as a whole. For example, a Lagrangian deformation model with a visco-elastoplastic crust is used to study deformation near a plate boundary. Ideally, the driving force of the deformation should be derived from the underlying mantle convection, which requires linking the Lagrangian deformation model with a Eulerian mantle convection model. As our understanding of geological processes evolves, the need for integrated modeling codes, which should reuse existing codes as much as possible, begins to surface. The GeoFramework project addresses this need by developing a suite of reusable and re-combinable tools for the Earth science community. GeoFramework is based on and extends Pyre, a Python-based modeling framework recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. Under the framework, a solver is aware of the existence of other solvers and can interact with them by exchanging information across adjacent boundaries. A solver needs to conform to a standard interface and provide its own implementation for exchanging boundary information. The framework also provides facilities to control the coordination between interacting solvers. We will show an example of linking two solvers within GeoFramework. CitcomS is a finite element code which solves for thermal convection within a 3D spherical shell. CitcomS can solve problems either in a full spherical (global) domain or in a restricted (regional) domain of a full sphere by using different meshers. We can embed a regional CitcomS solver within a global CitcomS solver. We note that linking instances of the same solver is conceptually equivalent to linking two different solvers. The global solver has a coarser grid and a longer stable time step than the regional solver. Therefore, a global-solver time step consists of several regional-solver time steps. The time-marching scheme is described below. First, the global solver is advanced one global-solver time step. Then, the regional solver is advanced for several regional-solver time steps until it catches up with the global solver. Within each regional-solver time step, the velocity field of the global solver is interpolated in time and imposed on the regional solver as boundary conditions. Finally, the temperature field of the regional solver is extrapolated in space and fed back to the global solver. The two solvers are linked and synchronized by this time-marching scheme. An effort to embed a visco-elastoplastic representation of the crust within viscous mantle flow is underway.
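
    The time-marching scheme, one coarse global step followed by several fine regional steps driven by time-interpolated global boundary values and a feedback of the regional state, can be sketched with toy solver objects. All class and method names are hypothetical; only the coordination pattern follows the text.

    ```python
    # Toy sketch of the global/regional coupling loop (all names hypothetical).
    class ToySolver:
        def __init__(self, dt, boundary=0.0):
            self.dt, self.t = dt, 0.0
            self.boundary, self.field = boundary, 0.0

        def step(self):                  # advance one of this solver's own time steps
            self.field += 0.1 * (self.boundary - self.field)
            self.t += self.dt

    global_solver = ToySolver(dt=4.0, boundary=1.0)   # coarse grid, long stable step
    regional = ToySolver(dt=1.0)                      # fine grid, short step

    for _ in range(10):                  # one global-solver time step per iteration
        v_old = global_solver.field
        global_solver.step()
        v_new = global_solver.field
        n_sub = int(round(global_solver.dt / regional.dt))
        for k in range(1, n_sub + 1):    # regional solver catches up with the global one
            w = k / n_sub                # global velocity interpolated in time...
            regional.boundary = (1 - w) * v_old + w * v_new   # ...imposed as boundary condition
            regional.step()
        # crude stand-in for feeding the regional temperature back to the global solver
        global_solver.field = 0.5 * (global_solver.field + regional.field)

    print(global_solver.t, regional.t)   # the two solvers stay synchronized in time
    ```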

  11. Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Martinelli, L.

    1991-01-01

    The system of equations consisting of the full Navier-Stokes equations and two turbulence equations was solved for the steady state using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values and obtaining rapid and uniform convergence rates for the flow and turbulence equations.

  12. Web processing service for landslide hazard assessment

    NASA Astrophysics Data System (ADS)

    Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.

    2012-04-01

    Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslide hazard assessment. The web processing service was built with the Esri ArcGIS Server solution and Python, developed using ArcPy, GDAL Python and NumPy. A complex model for landslide hazard analysis, using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation, was built and published as a WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives of the DEM, the effective precipitation, runoff, lithology and land use. All these parameters can be served to the client from other WFS services or by uploading and processing the data on the server. The user can select the option of creating the first and second derivatives from the DEM automatically on the server or of uploading data already calculated. One of the main dynamic factors in the landslide analysis model is the leaf area index (LAI). The LAI offers the advantage of modelling not just the changes between different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI can be derived from various satellite images or downloaded as a product. The upload of such data (time series) is possible using the NetCDF file format. The model runs at a monthly time step, and for each time step all parameter values and the a priori, conditional and posterior probabilities are obtained and stored in a log file. The validation process uses landslides that occurred during the period up to the active time step and checks the recorded probabilities and parameter values for those time steps against the values at the active time step. Each time a landslide is positively identified, new a priori probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis, and a NetCDF file is created that can be downloaded from the server together with the log file.

  13. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  14. Fractal analysis of lateral movement in biomembranes.

    PubMed

    Gmachowski, Lech

    2018-04-01

    Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early time dependence of the mean square displacement deviates from linear due to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop an asymptotic relation, and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule step parameters, namely the step length and step time, from data concerning confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the molecule step parameters. The experimental literature data on molecular trajectories recorded at a high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper value of the diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by lateral diffusion of other molecules in different membranes. The molecule step parameters are then utilized to analyze the problem of multiple visits in small compartments. The modeling of the diffusion exponent results in a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.

  15. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by existing methods are not sharp enough. This work makes the following contributions. First, we propose a method to extract midfrequency features for dictionary learning. This method reduces memory and time complexity without sacrificing performance. Second, we propose a detail wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off details and artifacts and to sharpen step edges. Finally, the step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  16. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can erroneously pass the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models as well as more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
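
    The univariate version of the procedure is quick to sketch: integrate the model's conditional intensity between successive spikes and test the rescaled intervals against the Exp(1) distribution with a KS test. The constant-rate Poisson example below is illustrative; in the multivariate extension discussed in the paper, each neuron's intensity would also depend on the other neurons' spiking histories.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Simulate a homogeneous Poisson spike train with rate lam on [0, T]
    lam, T = 5.0, 200.0
    spikes = np.cumsum(rng.exponential(1.0 / lam, size=int(2 * lam * T)))
    spikes = spikes[spikes < T]

    # Time-rescaling: z_k is the integral of the model intensity between spikes k-1 and k.
    # For a constant-rate model the integral is simply lam * (t_k - t_{k-1}).
    z = lam * np.diff(spikes)

    print(stats.kstest(z, "expon"))               # correct model: KS test does not reject
    print(stats.kstest(0.5 * z, "expon"))         # mis-specified rate: KS test rejects
    ```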

  17. Solvable continuous-time random walk model of the motion of tracer particles through porous media.

    PubMed

    Fouxon, Itzhak; Holzner, Markus

    2016-08-01

    We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015), DOI:10.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random, and the walker's velocity at consecutive steps is conserved with finite probability, mimicking the fact that there need be no abrupt change of velocity at a turning point. We provide the Laplace transform of the characteristic function of the walker's position, and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, as was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
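
    The model's core mechanism, random step lengths with a walker velocity that persists from one step to the next with finite probability, is easy to simulate directly. The sketch below is a schematic 1D version with illustrative distributions (exponential channel lengths, speeds with density ~ v^(alpha-1) near v = 0), not the paper's exact kernels, and it estimates the diffusion exponent from the mean square displacement.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ctrw_positions(n_walkers=2000, p_keep=0.5, alpha=1.5, t_end=200.0):
        """1D CTRW: each step has a random length; velocity persists with prob p_keep."""
        t_grid = np.linspace(0.0, t_end, 60)
        x_at = np.zeros((n_walkers, t_grid.size))
        for w in range(n_walkers):
            t = x = 0.0
            v = rng.uniform() ** (1.0 / alpha) * rng.choice([-1.0, 1.0])
            times, xs = [0.0], [0.0]
            while t < t_end:
                l = rng.exponential(1.0)                 # random channel (step) length
                if rng.uniform() > p_keep:               # velocity renewed at a turning point
                    v = rng.uniform() ** (1.0 / alpha) * rng.choice([-1.0, 1.0])
                t += l / abs(v)                          # step duration tau = l / |v|
                x += np.sign(v) * l
                times.append(t); xs.append(x)
            x_at[w] = np.interp(t_grid, times, xs)
        return t_grid, x_at

    t_grid, x = ctrw_positions()
    msd = (x**2).mean(axis=0)
    beta = np.polyfit(np.log(t_grid[1:]), np.log(msd[1:]), 1)[0]   # MSD ~ t**beta
    print(f"estimated diffusion exponent beta = {beta:.2f}")
    ```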

  18. Effect of nucleation on instability of step meandering during step-flow growth on vicinal 3C-SiC (0001) surfaces

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Chen, Xuejiang; Su, Juan

    2017-06-01

    A three-dimensional kinetic Monte Carlo (KMC) model has been developed to study the step instability caused by nucleation during the step-flow growth of 3C-SiC. In the model, a lattice mesh was established to fix the positions of atoms and bond partners based on the crystal lattice of 3C-SiC. The events considered in the model were adsorption and diffusion of adatoms on the terraces; attachment, detachment and interlayer transport of adatoms at the step edges; and nucleation of adatoms. The effects of nucleation on the instability of step meandering and on the coalescence of both islands and steps were then simulated by the model. The results showed that the instability of step meandering caused by nucleation was affected by the growth temperature, and the effects of nucleation on this instability were analyzed. Moreover, the surface roughness as a function of time for different temperatures was discussed. Finally, a phase diagram was presented to predict under which conditions the effects of nucleation on step meandering become significant, and the three different regimes, step-flow (SF), 2D nucleation (2DN), and 3D layer-by-layer (3DLBL), were determined.
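
    At the heart of any such KMC model is rejection-free event selection: pick the next event with probability proportional to its rate and advance the clock by an exponentially distributed increment. The kernel below shows just that, with made-up rates for the event classes named above; lattice geometry and bond counting are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical total rates (1/s) for the event classes in the model
    rates = {
        "adsorption": 50.0,
        "terrace_diffusion": 400.0,
        "step_attachment": 120.0,
        "step_detachment": 8.0,
        "nucleation": 2.0,
    }
    names = list(rates)
    cum = np.cumsum([rates[n] for n in names])
    R = cum[-1]

    t, counts = 0.0, dict.fromkeys(names, 0)
    for _ in range(10000):
        event = names[np.searchsorted(cum, rng.uniform() * R)]   # rate-proportional choice
        counts[event] += 1
        t += rng.exponential(1.0 / R)       # KMC time increment ~ Exp(mean 1/R_total)

    print(f"simulated time: {t:.4f} s")
    print(counts)                           # event frequencies track the relative rates
    ```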

  19. Modeling laser-plasma acceleration in the laboratory frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

    A simulation of laser-plasma acceleration in the laboratory frame. Both the laser and the wakefield buckets must be resolved over the entire domain of the plasma, requiring many cells and many time steps. While researchers often use a simulation window that moves with the pulse, this reduces only the number of cells, not the number of time steps. For an artistic impression of how to solve the simulation by using the boosted-frame method, watch the video "Modeling laser-plasma acceleration in the wakefield frame".

  20. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital

  1. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximum value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique makes it possible to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of modelling upper-plate deformation over multiple earthquake cycles at times of hundreds of thousands to millions of years, and discuss the effect of great earthquakes on the long-term stress field in the upper plate.
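
    The stepping policy described, collapsing the step to its minimum when an instability (rupture) is detected and growing it geometrically back toward the maximum as displacement rates decay, can be sketched independently of the solver. The growth factor and instability test below are illustrative assumptions; only the 40 s and 5 yr bounds come from the text.

    ```python
    YEAR = 365.25 * 24 * 3600.0
    DT_MIN, DT_MAX = 40.0, 5.0 * YEAR          # bounds quoted in the abstract
    GROW = 1.5                                  # assumed geometric growth factor

    def next_time_step(dt, slip_velocity, v_unstable=1e-3):
        """Collapse dt at rupture; otherwise grow it back toward DT_MAX."""
        if slip_velocity > v_unstable:          # instability detected (assumed criterion)
            return DT_MIN
        return min(dt * GROW, DT_MAX)

    # Toy seismic cycle: a velocity spike (earthquake) followed by postseismic decay
    dt, t, v = DT_MAX, 0.0, 1.0
    while v > 1e-9:
        dt = next_time_step(dt, v)
        t += dt
        v = 1.0 / (1.0 + t / 3600.0) ** 2       # assumed decay of slip velocity
    print(f"reached t = {t / YEAR:.3f} yr with adaptive stepping")
    ```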

  2. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, the Netherlands. The NRH model includes the variables that influence GPP, in particular radiation and temperature, and provides a robust empirical relationship between radiation and GPP through the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to update it. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty; the time series of posterior distributions also allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
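
    A compact way to reproduce this kind of analysis is a random-walk Metropolis sampler over the NRH parameters, using the common NRH form GPP(I) = (aI + P - sqrt((aI + P)^2 - 4*theta*aI*P)) / (2*theta). The priors, proposal widths and synthetic data below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def nrh(I, a, P, theta):
        """Non-rectangular hyperbola light-response curve."""
        s = a * I + P
        return (s - np.sqrt(s**2 - 4.0 * theta * a * I * P)) / (2.0 * theta)

    # Synthetic half-hourly radiation and "observed" GPP (illustrative)
    I = rng.uniform(0, 1500, 200)
    gpp_obs = nrh(I, 0.03, 20.0, 0.7) + rng.normal(0, 1.0, I.size)

    def log_post(p):
        a, P, theta, sigma = p
        if not (0 < a < 1 and 0 < P < 60 and 0 < theta < 1 and sigma > 0):
            return -np.inf                      # flat priors on bounded ranges
        resid = gpp_obs - nrh(I, a, P, theta)
        return -0.5 * np.sum((resid / sigma) ** 2) - I.size * np.log(sigma)

    p = np.array([0.05, 15.0, 0.5, 2.0])
    lp = log_post(p)
    widths = np.array([0.005, 0.5, 0.05, 0.1])  # proposal standard deviations
    chain = []
    for _ in range(20000):
        q = p + rng.normal(0, widths)
        lq = log_post(q)
        if np.log(rng.uniform()) < lq - lp:     # Metropolis accept/reject
            p, lp = q, lq
        chain.append(p.copy())
    chain = np.array(chain[5000:])              # discard burn-in

    # Posterior (hence uncertainty) of GPP for one half hour with I = 800 W m^-2
    gpp_post = nrh(800.0, chain[:, 0], chain[:, 1], chain[:, 2])
    print(np.percentile(gpp_post, [2.5, 50.0, 97.5]))
    ```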

  3. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Model performance was evaluated using various performance criteria. The results show that LGP models are superior to ANN and ANFIS models, especially in predicting peak inflows at both daily and hourly time steps. A detailed comparison of overall performance indicated that the combined input model (a combination of rainfall and inflow) performed better in both lumped and distributed input data modelling. The lumped input data models performed slightly better, which may be attributed to the reduced noise in the data as well as to the techniques used, their training approach, and the appropriate selection of network architecture, inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data models is attributed to larger variations and a smaller number of observed values.
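
    An illustrative sketch (not the study's code) of multi-time-step-ahead inflow prediction with lagged rainfall and inflow inputs, i.e., a "combined model" in the sense above. The lag depth, network size, synthetic data and the direct multi-output strategy are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_dataset(rain, inflow, n_lags=4, horizon=3):
    """Build lagged-input features and 1..horizon-step-ahead targets."""
    X, Y = [], []
    for t in range(n_lags, len(inflow) - horizon):
        X.append(np.r_[rain[t - n_lags:t], inflow[t - n_lags:t]])
        Y.append(inflow[t:t + horizon])
    return np.array(X), np.array(Y)

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, 1000)                      # synthetic forcing
inflow = np.convolve(rain, [0.5, 0.3, 0.2])[:1000]    # synthetic catchment response

X, Y = make_dataset(rain, inflow)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:800], Y[:800])
print("test R^2:", model.score(X[800:], Y[800:]))
```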

  4. Zone clearance in an infinite TASEP with a step initial condition

    NASA Astrophysics Data System (ADS)

    Cividini, Julien; Appert-Rolland, Cécile

    2017-06-01

    The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
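
    A Monte Carlo sketch of the single-time observable described above (assumptions: unit hopping rate, random-sequential updates, finite lattice standing in for the infinite one): the probability that a clearance zone [a, b) ahead of the step front is empty at time T, starting from a step profile.

```python
import numpy as np

def zone_empty_prob(a, b, T, L=400, trials=2000, seed=0):
    """Estimate P(zone [a, b) relative to the step front is empty at time T)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        occ = np.zeros(L, dtype=bool)
        occ[: L // 2] = True                   # step initial condition; front at L//2
        t = 0.0
        while t < T:
            i = rng.integers(0, L - 1)         # random-sequential update attempt
            if occ[i] and not occ[i + 1]:
                occ[i], occ[i + 1] = False, True
            t += rng.exponential(1.0 / L)      # total attempt rate ~ L
        zone = occ[L // 2 + a : L // 2 + b]
        hits += not zone.any()
    return hits / trials

print(zone_empty_prob(a=0, b=5, T=10.0))
```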

  5. United States Medical Licensing Examination and American Board of Pediatrics Certification Examination Results: Does the Residency Program Contribute to Trainee Achievement?

    PubMed

    Welch, Thomas R; Olson, Brad G; Nelsen, Elizabeth; Beck Dallaghan, Gary L; Kennedy, Gloria A; Botash, Ann

    2017-09-01

    To determine whether training site or prior examinee performance on the US Medical Licensing Examination (USMLE) Step 1 and Step 2 might predict pass rates on the American Board of Pediatrics (ABP) certifying examination. Data from graduates of pediatric residency programs completing the ABP certifying examination between 2009 and 2013 were obtained. For each graduate, results of the initial ABP certifying examination were obtained, as well as results on the National Board of Medical Examiners (NBME) Step 1 and Step 2 examinations. Hierarchical linear modeling was used to nest first-time ABP results within training programs to isolate the program contribution to ABP results while controlling for USMLE Step 1 and Step 2 scores. Stepwise linear regression was then used to determine which of these examinations was the better predictor of ABP results. A total of 1110 graduates of 15 programs had complete testing results and were included in the analysis. Mean ABP scores for these programs ranged from 186.13 to 214.32. The hierarchical linear model suggested that the interaction of Step 1 and Step 2 scores predicted ABP performance (F[1,1007.70] = 6.44, P = .011). In a multilevel model by training program, both USMLE Step examinations predicted first-time ABP results (b = .002, t = 2.54, P = .011). Linear regression analyses indicated that Step 2 results were a better predictor of ABP performance than Step 1 or a combination of the two USMLE scores. Performance on the USMLE examinations, especially Step 2, predicts performance on the ABP certifying examination. The contribution of training site to ABP performance was statistically significant, though modest compared with prior USMLE scores. Copyright © 2017 Elsevier Inc. All rights reserved.
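
    A hedged sketch of the nesting analysis described above: first-time ABP scores nested within residency programs (random intercept per program), controlling for USMLE Step 1 and Step 2. The synthetic data, column names and random-intercept specification are assumptions; the study's actual model may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, n_prog = 1110, 15
prog = rng.integers(0, n_prog, n)
step1 = rng.normal(225, 15, n)
step2 = rng.normal(235, 15, n)
prog_effect = rng.normal(0, 3, n_prog)                 # program-level contribution
abp = 0.3 * step1 + 0.5 * step2 + prog_effect[prog] + rng.normal(0, 10, n)

df = pd.DataFrame({"abp": abp, "step1": step1, "step2": step2, "program": prog})
# Random intercept for program isolates the program contribution to ABP scores.
fit = smf.mixedlm("abp ~ step1 * step2", data=df, groups=df["program"]).fit()
print(fit.summary())
```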

  6. Cognitive and emotional factors associated with elective breast augmentation among young women.

    PubMed

    Moser, Stephanie E; Aiken, Leona S

    2011-01-01

    The purpose of this research was to propose and evaluate a psychosocial model of young women's intentions to obtain breast implants and the preparatory steps taken towards having breast implant surgery. The model integrated anticipated regret, descriptive norms and image norms from the media into the theory of planned behaviour (TPB). Focus groups (n = 58) informed development of measures of outcome expectancies, preparatory steps and normative influence. The model was tested and replicated among two samples of young women who had ever considered getting breast implants (n = 200, n = 152). Intentions and preparatory steps served as outcomes. Model constructs and outcomes were initially assessed; outcomes were re-assessed 11 weeks later. Evaluative attitudes and anticipated regret predicted intentions; in turn, intentions, along with descriptive norms, predicted subsequent preparatory steps. Perceived risk (susceptibility, severity) of negative medical consequences of breast implants predicted anticipated regret, which predicted evaluative attitudes. Intentions and preparatory steps exhibited interplay over time. This research provides the first comprehensive model predicting intentions and preparatory steps towards breast augmentation surgery. It supports the addition of anticipated regret to the TPB and suggests mutual influence between intentions and preparatory steps towards a final behavioural outcome.

  7. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
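
    A minimal sketch of the voltage-stepping idea (not the paper's code): for a quadratic integrate-and-fire neuron dV/dt = V² + I, advance the state by a fixed voltage increment dV and deduce the elapsed time from the local slope, instead of advancing by a fixed time increment. The increment dV, parameters, and spike threshold are assumptions.

```python
def voltage_step_spike_time(v0, I, v_spike=30.0, dv=0.1):
    """Approximate time for V to run from v0 to v_spike (assumes I > 0)."""
    t, v = 0.0, v0
    while v < v_spike:
        slope = v * v + I          # QIF right-hand side
        t += dv / slope            # local time step implied by the voltage step
        v += dv
    return t

print(voltage_step_spike_time(v0=0.0, I=1.0))
```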

  8. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
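
    A hedged sketch of the POD step behind such reduced-order models: collect snapshots of the spatially discretized state over time, take an SVD, and keep the leading modes as a low-dimensional basis. The snapshot source and truncation tolerance here are assumptions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """snapshots: (n_states, n_snapshots) array; returns truncated basis Phi."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # modes capturing the given energy
    return U[:, :r]

# Reduced state: a(t) = Phi.T @ x(t); full reconstruction: x ≈ Phi @ a.
X = np.random.default_rng(2).normal(size=(500, 60))   # stand-in snapshot matrix
Phi = pod_basis(X)
print("reduced dimension:", Phi.shape[1])
```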

  9. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  10. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  11. Asymmetry in Determinants of Running Speed During Curved Sprinting.

    PubMed

    Ishimura, Kazuhiro; Sakurai, Shinji

    2016-08-01

    This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.

  12. A permeation theory for single-file ion channels: one- and two-step models.

    PubMed

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.

  13. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  14. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    NASA Astrophysics Data System (ADS)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
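
    A conceptual sketch of a time-step-convergence style pass/fail test (not the TSC implementation): a code change passes if its short-run solution differs from a trusted reference by no more than the reference model's own sensitivity to a refined time step. Arrays stand in for model state; the threshold rule is an assumption.

```python
import numpy as np

def tsc_style_test(ref_dt, ref_dt_half, candidate, n_sigma=3.0):
    """Each argument: (n_ensemble, n_state) array of end-of-run model states."""
    # Baseline: solution spread induced purely by halving the time step
    baseline = np.linalg.norm(ref_dt - ref_dt_half, axis=1)
    # Candidate: distance of the modified code's runs from the reference runs
    diff = np.linalg.norm(candidate - ref_dt, axis=1)
    threshold = baseline.mean() + n_sigma * baseline.std()
    return bool(diff.mean() <= threshold)

rng = np.random.default_rng(3)
ref = rng.normal(size=(12, 100))
print(tsc_style_test(ref, ref + rng.normal(0, 1e-3, ref.shape),
                     ref + rng.normal(0, 5e-4, ref.shape)))  # rounding-level -> pass
```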

  15. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary-element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step for calculating the quadrature coefficients was applied, using the symmetry of the integrand function and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that computational efficiency is better when the combined formulas are used.

  16. Optimization of airport security lanes

    NASA Astrophysics Data System (ADS)

    Chen, Lin

    2018-05-01

    The current airport security management system is widely implemented around the world to ensure the safety of passengers, but it might not be optimal. This paper aims to seek a better security system, one that maximizes security while minimizing inconvenience to passengers. Firstly, we apply a Petri net model to identify the steps where the main bottlenecks lie. Based on average tokens and transition times, the most time-consuming steps of the security process can be found, including inspection of passengers' identification and documents, preparing belongings to be scanned, and retrieving belongings afterwards. Then, we develop a queuing model to identify the factors affecting those time-consuming steps. Effective improvement measures include converting the current system to a single-queue, multi-server arrangement, intelligently predicting the number of security checkpoints that should be opened, and building green lanes with biological (biometric) identification for convenience. Furthermore, to test the theoretical results, we apply data to simulate the model, and the simulation results are consistent with those obtained through modeling. Finally, we apply our queuing model in a multi-cultural setting. The results suggest that, by quantifying and adjusting for the variance in wait time, the model can be applied to individuals with varying customs and habits. Overall, our paper considers multiple contributing factors, employs several models and performs extensive calculations, making it practical and reliable for real-world application.
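
    A sketch of the queueing calculation behind the single-queue, multi-server recommendation above: the Erlang C formula gives the probability of waiting in an M/M/c queue, from which the mean wait follows. The arrival and service rates below are illustrative assumptions.

```python
import math

def erlang_c(c, lam, mu):
    """P(arrival must wait) for c servers, arrival rate lam, service rate mu."""
    a = lam / mu                       # offered load
    rho = a / c
    if rho >= 1:
        raise ValueError("unstable queue")
    num = a**c / (math.factorial(c) * (1 - rho))
    den = sum(a**k / math.factorial(k) for k in range(c)) + num
    return num / den

def mean_wait(c, lam, mu):
    """Expected wait in queue, Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

print(mean_wait(c=4, lam=3.0, mu=1.0))   # average wait in the shared queue
```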

  17. Model Predictive Control-based gait pattern generation for wearable exoskeletons.

    PubMed

    Wang, Letian; van Asseldonk, Edwin H F; van der Kooij, Herman

    2011-01-01

    This paper introduces a new method for controlling wearable exoskeletons that does not need predefined joint trajectories. Instead, it only needs basic gait descriptors such as step length, swing duration, and walking speed. End-point Model Predictive Control (MPC) is used to generate the online joint trajectories based on these gait parameters. The real-time capability and control performance of the method during the swing phase of the gait cycle are studied in this paper. Experiments are performed by helping a human subject swing his leg with different patterns in the LOPES gait trainer. Results show that the method is able to assist subjects in making steps with different step lengths and step durations without predefined joint trajectories, and is fast enough for real-time implementation. Future study of the method will focus on controlling exoskeletons over the entire gait cycle. © 2011 IEEE

  18. Combining fast walking training and a step activity monitoring program to improve daily walking activity after stroke: a preliminary study

    PubMed Central

    Danks, Kelly A.; Pohlig, Ryan; Reisman, Darcy S.

    2016-01-01

    Objective To determine preliminary efficacy and to identify baseline characteristics predicting who would benefit most from fast walking training plus a step activity monitoring program (FAST+SAM) compared to fast walking training alone (FAST) in persons with chronic stroke. Design Randomized controlled trial with blinded assessors. Setting Outpatient clinical research laboratory. Participants 37 individuals more than 6 months post-stroke. Interventions Subjects were assigned to either FAST, which was walking training at their fastest possible speed on the treadmill (30 minutes) and over ground, 3 times/week for 12 weeks, or FAST plus a step activity monitoring program (FAST+SAM). The step activity monitoring program consisted of daily step monitoring with a StepWatch Activity monitor, goal setting, and identification of barriers to activity and strategies to overcome them. Main Outcome Measures Daily step activity metrics (steps/day, time walking/day), walking speed and six-minute walk test (6MWT) distance. Results There was a significant effect of time for both groups, with all outcomes improving from pre- to post-training (all p<0.05). FAST+SAM was superior to FAST for the 6MWT (p=0.018), with a larger increase in the FAST+SAM group. The interventions had differential effectiveness based on baseline step activity. Sequential moderated regression models demonstrated that for subjects with baseline levels of step activity and 6MWT distances below the mean, the FAST+SAM intervention was more effective than FAST (1715±1584 vs. 254±933 steps/day, respectively; p<0.05 for the overall model and ΔR2 for steps/day and 6MWT). Conclusions The addition of a step activity monitoring program to a fast walking training intervention may be most effective in persons with chronic stroke who have initially low levels of walking endurance and activity. Regardless of baseline performance, the FAST+SAM intervention was more effective for improving walking endurance. PMID:27240430

  19. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  20. Construction of moment-matching multinomial lattices using Vandermonde matrices and Gröbner bases

    NASA Astrophysics Data System (ADS)

    Lundengård, Karl; Ogutu, Carolyne; Silvestrov, Sergei; Ni, Ying; Weke, Patrick

    2017-01-01

    In order to describe and analyze the quantitative behavior of stochastic processes, such as the process followed by a financial asset, various discretization methods are used. One such set of methods are lattice models, where a time interval is divided into equal time steps and the rate of change of the process is restricted to a particular set of values in each time step. The well-known binomial and trinomial models are the most commonly used in applications, although several kinds of higher-order models have also been examined. Here we examine various ways of designing higher-order lattice schemes with different node placements in order to guarantee moment matching with the process.
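
    A sketch of moment matching on one lattice step via a Vandermonde system, in the spirit of the construction above: choose node displacements x_i, then solve V p = m so the step's discrete distribution reproduces the first raw moments of the target increment (here a Brownian increment with drift). The node placement is an assumption; this particular choice recovers the standard trinomial probabilities (1/6, 2/3, 1/6).

```python
import numpy as np

dt, mu, sigma = 0.01, 0.05, 0.2
x = np.array([-1.0, 0.0, 1.0]) * sigma * np.sqrt(3 * dt) + mu * dt  # 3 lattice nodes

# Raw moments E[X^0], E[X^1], E[X^2] of the normal increment
m = np.array([1.0,
              mu * dt,
              sigma**2 * dt + (mu * dt)**2])

V = np.vander(x, 3, increasing=True).T    # rows: x_i^0, x_i^1, x_i^2
p = np.linalg.solve(V, m)                  # branch probabilities
print(p, p.sum())                          # nonnegative, sums to 1
```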

  1. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Wavefield calculations can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.

  2. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. Several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, both these observations and the model forcings are recorded with a specific measurement error. It is therefore a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique, as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a sequential importance resampling (SIR) particle filter; however, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. To validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
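
    A minimal sketch of the windowed smoothing idea described above (not the authors' code): accumulate particle likelihoods over a window of time steps, then resample states and parameters jointly with the windowed weights, rather than re-weighting on a single observation. The transition and observation models below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def propagate(x, theta):            # placeholder state transition (assumed)
    return x + theta + rng.normal(0, 0.1, x.shape)

def loglik(x, y, sigma=0.2):        # placeholder observation density (assumed)
    return -0.5 * ((y - x) / sigma) ** 2

def sir_window_smoother(x, theta, y_window):
    logw = np.zeros(len(x))
    for y in y_window:               # sweep the whole window before resampling
        x = propagate(x, theta)
        logw += loglik(x, y)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(x), size=len(x), p=w)   # joint state-parameter resampling
    return x[idx], theta[idx]

x = rng.normal(0, 1, 500)            # particles: states
theta = rng.normal(0.1, 0.05, 500)   # particles: hydraulic-like parameter
x, theta = sir_window_smoother(x, theta, y_window=[0.2, 0.35, 0.5])
print(theta.mean())
```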

  3. A General Approach to Defining Latent Growth Components

    ERIC Educational Resources Information Center

    Mayer, Axel; Steyer, Rolf; Mueller, Horst

    2012-01-01

    We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged from measurement error. In the second step, we use contrast matrices to define the latent growth…

  4. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-02-29

    Nonlinear Maps for Design of Discrete-Time Models of Neuronal… neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, the time step can be made… propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal networks.

  5. Numerical modeling of solar irradiance on earth's surface

    NASA Astrophysics Data System (ADS)

    Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.

    2016-05-01

    Ground-level modeling and estimation of solar radiation involves estimating the equation of time, the Earth-Sun distance, the solar declination and the surface irradiance. Many studies have reported that these theoretical equations alone do not yield accurate radiation estimates, so many authors have applied corrections through field calibration with pyranometers (solarimeters) or through satellite data, the latter being a poor technique because it does not differentiate between radiation and radiant kinetic effects. Given a properly calibrated ground weather station in the Susques Salar, Jujuy Province, Republic of Argentina, the variable in question was modeled through the following process: 1. theoretical modeling; 2. graphical study of the theoretical and measured data; 3. adjustment of the primary calibration data through hourly segmentation, horizontal shifting and the addition of an asymptotic constant; 4. analysis of scatter plots and contrasting of series. Based on these steps, the model data were obtained as follows. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and a least-squares minimization (Excel Solver) between measured and modeled values yielded new asymptote values with the corresponding reformulation of the theoretical data; a monthly constant was added over the set time range (4:00 pm to 6:00 pm). Step four: the monthly correlations between measured and theoretical data for the model coefficients ranged from 0.7 to 0.9.

  6. The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations

    DTIC Science & Technology

    2011-12-01

    modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat… DES models, often referred to as "next-event" (Law and Kelton 2000), or discrete-time simulation (DTS), commonly referred to as "time-step"… Many combat models use DTS as their simulation time advance mechanism.

  7. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.

  8. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.

  9. Systems analysis of the vestibulo-ocular system. [mathematical model of vestibularly driven head and eye movements

    NASA Technical Reports Server (NTRS)

    Schmid, R. M.

    1973-01-01

    The vestibulo-ocular system is examined from the standpoint of system theory. The evolution of a mathematical model of the vestibulo-ocular system in an attempt to match more and more experimental data is followed step by step. The final model explains many characteristics of the eye movement in vestibularly induced nystagmus. The analysis of the dynamic behavior of the model at the different stages of its development is illustrated in time domain, mainly in a qualitative way.

  10. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation in weather forecasting. The filter is regarded as superior to the optimal interpolation method because it determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
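
    A generic sketch of one such "generalized time step" (propagate the state and its error covariance, then update with observations and the gain matrix). The matrices below are random placeholders, not the shallow-water model's operators.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    """One forecast-analysis cycle for state x and error covariance P."""
    # Forecast step: model dynamics F advance state and covariance
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # Analysis step: gain matrix K blends forecast with observations y
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

n, m = 6, 3
rng = np.random.default_rng(5)
x, P = np.zeros(n), np.eye(n)
F, Q = np.eye(n) * 0.98, np.eye(n) * 0.01
H, R = rng.normal(size=(m, n)), np.eye(m) * 0.1
x, P = kalman_step(x, P, F, Q, H, R, y=rng.normal(size=m))
print(np.trace(P))   # total forecast error variance after the step
```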

  11. Comparison of Stepped Care Delivery Against a Single, Empirically Validated Cognitive-Behavioral Therapy Program for Youth With Anxiety: A Randomized Clinical Trial.

    PubMed

    Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine

    2017-10-01

    Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  12. Anomalous diffusion with linear reaction dynamics: from continuous time random walks to fractional reaction-diffusion equations.

    PubMed

    Henry, B I; Langlands, T A M; Wearne, S L

    2006-09-01

    We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
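
    Schematically, the three variants described verbally above can be written as follows. This is a hedged sketch, not the paper's exact equations: u is the walker density, κ the linear rate, {}_0D_t^{1-α} the Riemann-Liouville fractional derivative, and L_nonstd stands for the nonstandard diffusion operator of the second model, whose precise form is given in the original paper.

```latex
% Schematic forms implied by the abstract; prefactors per the original paper.
\begin{align}
  \partial_t u &= {}_0D_t^{1-\alpha}\!\left[ D_\alpha \nabla^2 u - \kappa u \right]
    && \text{(removal at the start of each step)} \\
  \partial_t u &= {}_0D_t^{1-\alpha}\!\left[ \mathcal{L}_{\mathrm{nonstd}}\, u \right] - \kappa u
    && \text{(per-capita removal during waiting times)} \\
  \partial_t u &= D_\alpha\, {}_0D_t^{1-\alpha}\nabla^2 u - \kappa u
    && \text{(phenomenological model)}
\end{align}
```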

  13. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision is hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to empirical reconstruction of the evolution operator, in stochastic form, from space-distributed time series. The main problem in empirical modeling is choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics; this leads to a more robust model and better reconstruction quality. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g., an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is construction of an evolution operator from the principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed, based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANNs). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. The results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  16. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
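
    An illustrative sketch of the PhyNNeSS-style pipeline described above (not the authors' code): offline, prescribed nodal displacements are paired with precomputed force responses; online, a radial basis function network reconstructs forces at haptic rates. The toy force law and scipy's general-purpose RBF interpolator stand in for the FEM database and the paper's trained RBFN.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
U = rng.uniform(-1, 1, size=(2000, 3))     # prescribed displacements (offline step)
F = 50 * U + 200 * U**3                    # stand-in for precomputed FEM forces

rbf = RBFInterpolator(U, F, neighbors=50)  # "training": fit RBF coefficients

u_query = np.array([[0.1, -0.2, 0.05]])    # online: one haptic-rate query
print(rbf(u_query))                        # reconstructed interaction force
```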

  17. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  18. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  19. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
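
    A sketch of the discrete (Bessel-function) solution mentioned in the two records above: for a symmetric single-step SMM with unit total mutation rate on an unbounded lattice, the probability of a net repeat-length change n after time t is P_n(t) = e^{-t} I_n(t). Boundary effects (the reflecting minimum length) are ignored here; the rate normalization is an assumption.

```python
import numpy as np
from scipy.special import ive

def smm_pmf(n, t):
    """P_n(t) = exp(-t) * I_n(t); ive is the exponentially scaled I_n (no overflow)."""
    return ive(n, t)

t = 5.0
ns = np.arange(-10, 11)
print(smm_pmf(ns, t).sum())    # ≈ 1 once the tails are included
print(smm_pmf(0, t))           # probability of no net change after time t
```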

  20. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface, are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
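
    A sketch of the elementary step-size controller idea behind such a posteriori schemes (illustrative, not the authors' algorithm): estimate the local error, then scale dt toward the tolerance with a safety factor and growth/shrink bounds, all of which are assumed values.

```python
def new_step_size(dt, err, tol, order=2, safety=0.9, fac_min=0.3, fac_max=2.0):
    """Elementary controller: dt_new = safety * dt * (tol/err)^(1/(order+1))."""
    if err == 0.0:
        return dt * fac_max                       # no detectable error: grow dt
    fac = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, fac))   # clip to avoid dt oscillations

print(new_step_size(dt=0.01, err=3e-4, tol=1e-4))  # error above tol -> shrink dt
```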

  1. Adaptive time stepping for fluid-structure interaction solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayr, M.; Wall, W. A.; Gee, M. W.

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.

  2. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  3. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  4. Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2004-01-01

    Information gathered from sensors is used to determine sensor health. Once a normal mode of operation is established, any deviation from normal behavior indicates a change, which may be due to a malfunction of the sensor(s) or of the system (or process). The step-up and step-down features, as well as the sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), a drift, and a step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for at least three time constants of the main process every time a process feature (e.g., a step change) occurs. The short-time Fourier transform of the signal is taken using a Hamming window at three window widths, with the DC value removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution, and the results show distinct shapes corresponding to each process feature.
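
    A minimal sketch of the described processing chain, assuming a sampling rate and a toy RC-like signal of our own choosing (not the paper's data), using scipy's STFT with a Hamming window, three window widths, and per-window DC removal:

        import numpy as np
        from scipy.signal import stft

        fs = 100.0                              # assumed sampling rate (Hz)
        t = np.arange(0.0, 60.0, 1.0 / fs)
        tau = 3.0                               # RC time constant of the main process

        # toy signal: step-up (charging), drift, step-down (discharging)
        x = np.empty_like(t)
        up, drift, down = t < 20, (t >= 20) & (t < 40), t >= 40
        x[up] = 1.0 - np.exp(-t[up] / tau)
        x[drift] = x[up][-1] + 0.002 * (t[drift] - 20.0)
        x[down] = x[drift][-1] * np.exp(-(t[down] - 40.0) / tau)

        # three Hamming window widths; the per-window mean (DC value) is
        # removed before the FFT, as described in the abstract
        for nperseg in (64, 128, 256):
            f, tt, Z = stft(x, fs=fs, window='hamming', nperseg=nperseg,
                            detrend='constant')
            print(nperseg, Z.shape)             # |Z| gives the 3-D spectral surface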

  5. Assessing dose–response effects of national essential medicine policy in China: comparison of two methods for handling data with a stepped wedge-like design and hierarchical structure

    PubMed Central

    Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun

    2017-01-01

    Objectives To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting Routinely and annually collected national data on China from 2008 to 2012. Participants 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. Conclusions For examining dose–response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. PMID:28399510

  6. Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Comiskey, J. G.

    1979-01-01

    This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.

  7. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a large derivative stencil and a large model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. It allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated-grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We compared the performance of this operator in three programming languages and found that it can increase execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We further carried out tests in MATLAB to analyse the effect of model size and number of time steps on total simulation run-time; there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', implementing the proposed simulation scheme is available upon request.
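
    As an illustration of the vectorized-operator idea (a sketch under our own assumptions, not the authors' FDwave code), a fourth-order staggered-grid derivative can update every node of a field at once with array slicing:

        import numpy as np

        C1, C2 = 9.0 / 8.0, -1.0 / 24.0    # 4th-order staggered-grid coefficients

        def dx_staggered(a, dx):
            """Vectorized x-derivative of a whole 2-D field on a staggered grid.

            All interior nodes are updated at once through array slicing,
            exactly the kind of matrix-style update that avoids per-node loops.
            """
            d = np.zeros_like(a)
            d[1:-2, :] = (C1 * (a[2:-1, :] - a[1:-2, :])
                          + C2 * (a[3:, :] - a[:-3, :])) / dx
            return d

        # usage in a velocity-stress update, e.g. vx += (dt / rho) * dx_staggered(sxx, dx)
        sxx = np.random.rand(200, 200)
        print(dx_staggered(sxx, 10.0).shape)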

  8. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers

    NASA Astrophysics Data System (ADS)

    Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun

    2015-12-01

    Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupling nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single-wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have less than 10% margin of error comparing with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.

  9. An Idealized Test of the Response of the Community Atmosphere Model to Near-Grid-Scale Forcing Across Hydrostatic Resolutions

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Reed, K. A.

    2018-02-01

    A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to the reductions in forcing scale that are known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising bubble experiments, in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, through incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.

  10. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
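
    The implicit half of the comparison is easy to sketch. The toy below (our illustration, not the MAS code) integrates 1-D heat conduction with backward Euler at a step far beyond the explicit limit, solved by conjugate gradients with a point-Jacobi preconditioner:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        n, dx, kappa = 1000, 1e-2, 1.0
        dt = 100.0 * dx**2 / (2.0 * kappa)     # ~100x the explicit stability limit

        # backward Euler for dT/dt = kappa * d2T/dx2: (I - dt*kappa*L) T_new = T_old
        L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
        A = (sp.identity(n) - dt * kappa * L).tocsr()

        x = np.arange(n) * dx
        T = np.exp(-((x - 5.0) ** 2))          # initial temperature profile

        Minv = 1.0 / A.diagonal()              # point-Jacobi preconditioner
        M = LinearOperator((n, n), matvec=lambda r: Minv * r)
        T_new, info = cg(A, T, M=M)
        print(info == 0, T_new.max())          # info == 0 means PCG converged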

  11. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predictions indicate that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of IBM parameter settings.
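
    A minimal sketch of one discrete-time step of such an IBM, with binomial fluctuations in birth, death and movement on a periodic lattice (parameter values and the sequencing of death before movement are our simplifications, not the authors' exact scheme):

        import numpy as np

        rng = np.random.default_rng(1)
        n = rng.poisson(10, size=(64, 64))        # individuals per lattice cell
        b, d, m, K = 0.05, 0.02, 0.1, 50.0        # per-step probabilities and capacity

        def step(n):
            # binomial draws are the source of the discrete-time fluctuations
            births = rng.binomial(n, np.clip(b * (1.0 - n / K), 0.0, 1.0))
            deaths = rng.binomial(n, d)
            survivors = n - deaths
            movers = rng.binomial(survivors, m)
            # split movers among the four neighbours (sequential binomial split)
            north = rng.binomial(movers, 0.25)
            south = rng.binomial(movers - north, 1.0 / 3.0)
            east = rng.binomial(movers - north - south, 0.5)
            west = movers - north - south - east
            new = survivors - movers + births
            new += np.roll(north, -1, 0) + np.roll(south, 1, 0)
            new += np.roll(east, -1, 1) + np.roll(west, 1, 1)
            return new

        for _ in range(100):
            n = step(n)
        print(n.mean())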

  12. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    PubMed

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million annual cases of severe illness and 250 000-500 000 deaths. An accurate multi-step-ahead time-series forecasting model is urgently needed to help hospitals dynamically assign beds to influenza patients in seasons that vary from year to year, and to help pharmaceutical companies formulate flexible manufacturing plans for vaccines that differ from year to year. In this study, we evaluated four different multi-step prediction algorithms in the long short-term memory (LSTM) framework. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction for the US influenza-like illness rates were all <15%, with an average of 12.930%. To the best of our knowledge, this is the first time that LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. This modelling methodology can be applied in other countries and could therefore help prevent and control influenza worldwide.
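
    The paper's four multi-step strategies are not spelled out in the abstract; the sketch below illustrates just one of them, iterated single-output prediction, with a deliberately small PyTorch LSTM (the architecture and names are our assumptions, not the six-layer model of the study):

        import torch
        import torch.nn as nn

        class ILIForecaster(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                    num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):                  # x: (batch, window, 1)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])       # one-step-ahead prediction

        def predict_multi_step(model, window, horizon):
            """Iterate single-output predictions to build an n-step-ahead forecast."""
            seq = window.clone()                   # (1, window_len, 1)
            preds = []
            with torch.no_grad():
                for _ in range(horizon):
                    y = model(seq)                 # (1, 1)
                    preds.append(y.item())
                    # slide the window: drop the oldest point, append the prediction
                    seq = torch.cat([seq[:, 1:], y.unsqueeze(1)], dim=1)
            return preds

        model = ILIForecaster()
        window = torch.randn(1, 52, 1)             # e.g. one year of weekly ILI rates
        print(predict_multi_step(model, window, horizon=13))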

  13. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.

  14. Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments

    NASA Astrophysics Data System (ADS)

    Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon

    2013-04-01

    A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code with multiple source and receiver dipoles of finite length has been written to investigate transient EM problems. With this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has much smaller signal amplitude than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, and is increasingly delayed with the depth of the HC layer.

  15. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High-hydrostatic-pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential-decay models, and a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential-decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5-log10 reduction (5D); the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
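
    The primary Weibull fit is straightforward to reproduce in outline. The sketch below fits log10(N/N0) = -b * t^n to illustrative survival data of our own invention (not the paper's measurements) and derives the time to a 5-log10 reduction:

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_log_survival(t, b, n):
            # Weibull (Mafart) form: log10(N/N0) = -b * t**n
            return -b * t**n

        # illustrative data only, not the paper's measurements
        t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
        logS = np.array([-0.4, -0.9, -1.8, -2.4, -2.9, -3.6, -4.1])

        (b, n), _ = curve_fit(weibull_log_survival, t, logS, p0=(1.0, 1.0))
        t5 = (5.0 / b) ** (1.0 / n)    # time to a 5-log10 reduction, cf. t_5 above
        print(b, n, t5)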

  16. Transport and dielectric properties of water and the influence of coarse-graining: Comparing BMW, SPC/E, and TIP3P models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, Daniel; Boresch, Stefan; Steinhauser, Othmar

    Long-term molecular dynamics simulations are used to compare the single particle dipole reorientation time, the diffusion constant, the viscosity, and the frequency-dependent dielectric constant of the coarse-grained big multipole water (BMW) model to two common atomistic three-point water models, SPC/E and TIP3P. In particular, the agreement between the calculated viscosity of BMW and the experimental viscosity of water is satisfactory. We also discuss contradictory values for the static dielectric properties reported in the literature. Employing molecular hydrodynamics, we show that the viscosity can be computed from single particle dynamics, circumventing the slow convergence of the standard approaches. Furthermore, our data indicate that the Kivelson relation connecting single particle and collective reorientation time holds true for all systems investigated. Since simulations with coarse-grained force fields often employ extremely large time steps, we also investigate the influence of time step on dynamical properties. We observe a systematic acceleration of system dynamics when increasing the time step. Carefully monitoring energy/temperature conservation is found to be a sufficient criterion for the reliable calculation of dynamical properties. By contrast, recommended criteria based on the ratio of fluctuations of total vs. kinetic energy are not sensitive enough.

  17. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    DOE PAGES

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...

    2017-02-03

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.

  18. Simplified jet fuel reaction mechanism for lean burn combustion application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman

    1993-01-01

    Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. Detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented. This mechanism is verified by comparing with Jet-A fuel ignition delay time experimental data, and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.

  19. Mass imbalances in EPANET water-quality simulations

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    2018-04-01

    EPANET is widely employed to simulate water quality in water distribution systems. However, in general, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results only for short water-quality time steps. Overly long time steps can yield errors in concentration estimates and can result in situations in which constituent mass is not conserved. The use of a time step that is sufficiently short to avoid these problems may not always be feasible. The absence of EPANET errors or warnings does not ensure conservation of mass. This paper provides examples illustrating mass imbalances and explains how such imbalances can occur because of fundamental limitations in the water-quality routing algorithm used in EPANET. In general, these limitations cannot be overcome by the use of improved water-quality modeling practices. This paper also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, toward those obtained using the preliminary event-driven approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations. The results presented in this paper should be of value to those who perform water-quality simulations using EPANET or use the results of such simulations, including utility managers and engineers.

  20. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  1. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  2. Willingness-to-pay for steelhead trout fishing: Implications of two-step consumer decisions with short-run endowments

    NASA Astrophysics Data System (ADS)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2010-09-01

    Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.

  3. Multiphysics modelling of the separation of suspended particles via frequency ramping of ultrasonic standing waves.

    PubMed

    Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai

    2013-03-01

    A model was developed to determine the local changes in concentration of particles and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency-ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles; this was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physics phenomena of piezoelectricity, acoustic fields and diffusion of particles were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field by assuming that the pressure field achieves a pseudo-steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase of diffusivity when the concentration approaches saturation; however, this steep increase was found to create numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo one-dimensional case due to computational power limitations. The particle distribution predicted by the model is in good agreement with the experimental data, as it accurately follows the movement of the bands in the centre of the chamber.

  4. Personal computer study of finite-difference methods for the transonic small disturbance equation

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1989-01-01

    Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.

  5. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred

    2016-03-01

    Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. This is used increasingly to validate process-based simulators and remote-sensing-derived estimates of simulated GPP at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
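
    A bare-bones version of this workflow can be sketched with a common NRH parameterization and a random-walk Metropolis sampler (the light-response form below is one standard NRH expression; the data, priors and proposal scales are placeholders, not the study's):

        import numpy as np

        def nrh(par, I):
            # non-rectangular hyperbola: alpha = initial slope, pmax = plateau,
            # theta = degree of curvature of the light-response curve
            alpha, pmax, theta = par
            s = alpha * I + pmax
            return (s - np.sqrt(s**2 - 4.0 * theta * alpha * I * pmax)) / (2.0 * theta)

        rng = np.random.default_rng(2)
        I = np.linspace(0.0, 1500.0, 48)                   # half-hourly radiation (toy)
        gpp_obs = nrh((0.02, 20.0, 0.9), I) + rng.normal(0.0, 0.5, I.size)

        def log_post(par):
            if np.any(np.asarray(par) <= 0.0) or par[2] >= 1.0:
                return -np.inf                             # flat prior on a bounded box
            resid = gpp_obs - nrh(par, I)
            return -0.5 * np.sum(resid**2) / 0.5**2

        par = np.array([0.01, 15.0, 0.5])
        chain, lp = [], log_post(par)
        for _ in range(20000):                             # Metropolis random walk
            prop = par + rng.normal(0.0, [0.002, 0.5, 0.02])
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                par, lp = prop, lp_prop
            chain.append(par)
        chain = np.array(chain)
        print(chain[5000:].mean(axis=0))                   # posterior mean after burn-in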

  6. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators.

    PubMed

    Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba

    2017-01-01

    The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examination subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
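
    A sketch of the first model, predicting Step 2 CK from mean preclinical course score and Step 1 score, is shown below on synthetic numbers (the data and the cut score used for flagging are invented for illustration):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n = 218                                  # cohort size from the abstract
        course_mean = rng.normal(80.0, 6.0, n)   # synthetic scores, not study data
        step1 = 2.0 * course_mean + 60.0 + rng.normal(0.0, 10.0, n)
        step2ck = 0.7 * step1 + 0.8 * course_mean + rng.normal(0.0, 8.0, n)

        X = np.column_stack([course_mean, step1])
        model = LinearRegression().fit(X, step2ck)
        print(model.score(X, step2ck))           # R^2; the paper reports 0.56

        threshold = 209.0                        # illustrative cut score (assumed)
        at_risk = np.flatnonzero(model.predict(X) < threshold)
        print(at_risk.size, "students flagged for additional support")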

  7. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

    Independent prognostic variables in cloud-resolving modeling are chosen on the basis of an analysis of microphysical timescales in clouds versus the time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics can be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that these prognostic variables work well even when a large time step (e.g., 10 s) is used for numerical integration.

  8. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
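
    The coupling of attempts to time steps is easy to sketch. Below is a minimal 1-D continuum RSA loop, one attempt per simulation time step, with an attachment probability standing in for the macroscopic reaction rate (all values are our assumptions):

        import numpy as np

        rng = np.random.default_rng(4)
        L, r = 1.0, 0.005                    # line length and molecule radius
        p_attach = 0.3                       # P(adsorb | molecule hits the surface)
        centers = []

        def attempt():
            """One RSA attempt per simulation time step."""
            if rng.random() > p_attach:
                return False                 # hit, but the reaction did not occur
            x = rng.uniform(r, L - r)
            if all(abs(x - c) >= 2 * r for c in centers):
                centers.append(x)            # no overlap: molecule sticks irreversibly
                return True
            return False

        for step in range(200_000):
            attempt()
        coverage = 2 * r * len(centers) / L
        print(coverage)                      # approaches the 1-D jamming limit ~0.7476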

  9. A new algorithm for automatic Outlier Detection in GPS Time Series

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Mattia, Mario; Rossi, Massimo; Palano, Mimmo; Bruno, Valentina

    2010-05-01

    Nowadays continuous GPS time series are considered a crucial product of GPS permanent networks, useful in many geoscience fields, such as active tectonics, seismology, crustal deformation and volcano monitoring (Altamimi et al. 2002; Elósegui et al. 2006; Aloisi et al. 2009). Although GPS data-processing software has increased in reliability, the time series are still affected by different kinds of noise, from intrinsic noise (e.g. tropospheric delay) to un-modeled noise (e.g. cycle slips, satellite faults, parameter changes). Typically, GPS time series present characteristic noise that is a linear combination of white noise and correlated colored noise, and this characteristic is fractal in the sense that it is evident at every time scale or sampling rate considered. The un-modeled noise sources result in spikes, outliers and steps, which can appreciably influence the estimation of the velocities of the monitored sites. Outlier detection in generic time series is widely treated in the literature (Wei 2005), but is not fully developed for the specific kind of GPS series. We propose a robust automatic procedure for cleaning GPS time series of outliers and, especially for long daily series, of steps due to strong seismic or volcanic events or merely to instrumentation changes such as antenna and receiver upgrades. The procedure is divided in two steps: a first step for colored-noise reduction and a second step for outlier detection through adaptive series segmentation. Both algorithms present novel ideas and are nearly unsupervised. In particular, we propose an algorithm to estimate an autoregressive model for the colored noise in GPS time series in order to subtract the effect of non-Gaussian noise on the series. This step is useful for the subsequent adaptive segmentation, which requires the hypothesis of Gaussian noise. The proposed algorithms are tested in a benchmark case study and the results confirm that they are effective and reasonable. References: Aloisi, M., Bonaccorso, A., Cannavò, F., Gambino, S., Mattia, M., Puglisi, G., Boschi, E., 2009. A new dyke intrusion style for the Mount Etna May 2008 eruption modelled through continuous tilt and GPS data. Terra Nova, 21(4): 316-321. DOI:10.1111/j.1365-3121.2009.00889.x; Altamimi, Z., Sillard, P., Boucher, C., 2002. ITRF2000: A new release of the International Terrestrial Reference Frame for earth science applications. J. Geophys. Res. Solid Earth, 107(B10): 2214; Elósegui, P., Davis, J.L., Oberlander, D., Baena, R., Ekström, G., 2006. Accuracy of high-rate GPS for seismology. Geophys. Res. Lett., 33: L11308. DOI:10.1029/2006GL026065; Wei, W.S., 2005. Time Series Analysis: Univariate and Multivariate Methods, 2nd ed. Addison Wesley. ISBN-10: 0321322169.
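
    The two-step idea can be illustrated compactly: prewhiten with a fitted AR(1) colored-noise model, then flag residuals with a robust z-score. This is a simplified sketch of the general approach, not the authors' algorithm:

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.arange(1000)
        e = np.zeros(t.size)
        for i in range(1, t.size):           # synthetic AR(1) colored noise
            e[i] = 0.6 * e[i - 1] + rng.normal(0.0, 1.0)
        x = 0.005 * t + e                    # linear tectonic trend + noise
        x[400] += 8.0                        # spike (outlier)
        x[700:] += 5.0                       # coseismic-like step

        # step 1: detrend, estimate the AR(1) coefficient, and whiten
        trend = np.polyval(np.polyfit(t, x, 1), t)
        r = x - trend
        phi = np.corrcoef(r[:-1], r[1:])[0, 1]
        w = r[1:] - phi * r[:-1]             # approximately white residuals

        # step 2: robust (MAD-based) z-score on the whitened residuals
        mad = np.median(np.abs(w - np.median(w)))
        z = 0.6745 * (w - np.median(w)) / mad
        print(np.flatnonzero(np.abs(z) > 5) + 1)   # suspected spikes / step onsets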

  10. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.

  11. Modeling of Flow about Pitching and Plunging Airfoil Using High-Order Schemes

    DTIC Science & Technology

    2008-03-13

    (The DTIC abstract is garbled by extraction; only fragments are recoverable.) Computed results are compared with available experimental data, including the lift force for a plunging NACA0012 airfoil and visualization of the vortical flowfield for plunging cases. The solution is advanced from time step m to time step m+1 in stages,

        f_{n+1} = f_n + b_n * H_n,            (28)
        H_n = a_n * H_{n-1} + dt * u_n,       (29)

    where n refers to the stage number and the value of f at the final stage gives the solution at time step m+1.
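
    Equations (28)-(29) are the standard two-register low-storage Runge-Kutta form, which the sketch below implements with Williamson's classic RK3 coefficients (an assumption on our part, since the report's own a_n, b_n are not recoverable):

        import numpy as np

        # Williamson low-storage RK3 coefficients (assumed; the report's values
        # for a_n, b_n in Eqs. (28)-(29) are not recoverable from the extract)
        A = np.array([0.0, -5.0 / 9.0, -153.0 / 128.0])
        B = np.array([1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0])

        def rhs(f):
            return -f                       # placeholder ODE: df/dt = -f

        def advance(f, dt):
            """One step m -> m+1: H_n = a_n*H_{n-1} + dt*rhs(f); f += b_n*H_n."""
            H = np.zeros_like(f)
            for a_n, b_n in zip(A, B):
                H = a_n * H + dt * rhs(f)
                f = f + b_n * H
            return f

        f = np.array([1.0])
        for _ in range(100):
            f = advance(f, 0.01)
        print(f, np.exp(-1.0))              # compare with the exact solution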

  12. Occupational Physical Activity Habits of UK Office Workers: Cross-Sectional Data from the Active Buildings Study.

    PubMed

    Smith, Lee; Sawyer, Alexia; Gardner, Benjamin; Seppala, Katri; Ucci, Marcella; Marmot, Alexi; Lally, Pippa; Fisher, Abi

    2018-06-09

    Habitual behaviours are learned responses that are triggered automatically by associated environmental cues. The unvarying nature of most workplace settings makes workplace physical activity a prime candidate for a habitual behaviour, yet the role of habit strength in occupational physical activity has not been investigated. Aims of the present study were to: (i) document occupational physical activity habit strength; and (ii) investigate associations between occupational activity habit strength and occupational physical activity levels. A sample of UK office-based workers (n = 116; 53% female; median age 40 years, SD 10.52) was fitted with activPAL accelerometers worn for 24 h on five consecutive days, providing an objective measure of occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. A self-report index measured the automaticity of two occupational physical activities ("being active" (e.g., walking to printers and coffee machines) and "stair climbing"). Adjusted linear regression models investigated the association between occupational activity habit strength and objectively measured occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. Eighty-one per cent of the sample reported habits for "being active", and 62% reported habits for "stair climbing". In adjusted models, reported habit strength for "being active" was positively associated with average occupational sit-to-stand transitions per hour (B = 0.340, 95% CI: 0.053 to 0.627, p = 0.021). "Stair climbing" habit strength was unexpectedly negatively associated with average hourly stepping time (B = -0.01, 95% CI: -0.01 to -0.00, p = 0.006) and average hourly occupational step count (B = -38.34, 95% CI: -72.81 to -3.88, p = 0.030), which may reflect that people with stronger stair-climbing habits compensate by walking fewer steps overall. Results suggest that stair climbing and office-based occupational activity can be habitual. Interventions might fruitfully promote habitual workplace activity, although, in light of potential compensation effects, such interventions should perhaps focus on promoting moderate-intensity activity.

  13. Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Kourkouss, Sahim M.

    2010-06-01

    We construct a two-dimensional electric-circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of conductance. Branched patterns of surface leaders surrounded by surface corona appear in numerical simulations. The fractal dimension of the branched discharge patterns is calculated as voltage and capacitance are varied. We find that surface leaders often grow stepwise in time, as is observed for the stepped leaders of lightning.

  14. The importance of age composition of 12-step meetings as a moderating factor in the relation between young adults' 12-step participation and abstinence.

    PubMed

    Labbe, Allison K; Greene, Claire; Bergman, Brandon G; Hoeppner, Bettina; Kelly, John F

    2013-12-01

    Participation in 12-step mutual help organizations (MHO) is a common continuing care recommendation for adults; however, little is known about the effects of MHO participation among young adults (i.e., ages 18-25 years), for whom the typically older age composition at meetings may serve as a barrier to engagement and benefits. This study examined whether the age composition of 12-step meetings moderated the recovery benefits derived from attending MHOs. Young adults (n=302; 18-24 years; 26% female; 94% White) enrolled in a naturalistic study of residential treatment effectiveness were assessed at intake, and 3, 6, and 12 months later, on 12-step attendance, age composition of attended 12-step groups, and treatment outcome (Percent Days Abstinent [PDA]). Hierarchical linear models (HLM) tested the moderating effect of age composition on PDA concurrently and in lagged models controlling for confounds. A significant three-way interaction between attendance, age composition, and time was detected in the concurrent (p=0.002), but not lagged, model (b=0.38, p=0.46). Specifically, a similar age composition was helpful early post-treatment among low 12-step attendees, but became detrimental over time. Treatment and other referral agencies might enhance the likelihood of successful remission and recovery among young adults by locating and initially linking such individuals to age-appropriate groups. Once engaged, however, it may be prudent to encourage gradual integration into the broader mixed-age range of 12-step meetings, wherein it is possible that older members may provide the depth and length of sober experience needed to carry young adults forward into long-term recovery.

  15. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen-Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We also have evaluated time delays for the hyperbolic step potential in the classical case and obtained striking similarities with the quantum case. Highlights: the scattering matrix of the hyperbolic step potential is studied; the scattering matrix has a branch cut and an infinite number of poles; the poles are associated with anti-bound states; SUSY partners are constructed using anti-bound states; Wigner time delays for the hyperbolic step and partner potentials are compared.

  16. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. Furthermore, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  18. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that the random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
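
    The core idea of sampling a random walk at times drawn uniformly in log-time, rather than on a fixed grid, can be sketched for the simple memoryless diffusive case, where increments over arbitrary intervals are exactly Gaussian. (The paper's MVM trajectories require the bound-particle dynamics and are not reproduced here.)

```python
# Sketch: free Brownian walk sampled at log-uniformly distributed times.
import numpy as np

rng = np.random.default_rng(0)
D = 1.0                                        # diffusion coefficient
t = np.sort(10 ** rng.uniform(-6, 6, 4000))    # times spanning 12 decades
dt = np.diff(t, prepend=0.0)                   # gaps between sample times
# for pure diffusion, increments over dt are exactly N(0, 2*D*dt)
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt)))

# averaging x(t)**2 over many such trajectories recovers <x^2(t)> = 2*D*t
print(f"span: t in [{t[0]:.1e}, {t[-1]:.1e}], final x = {x[-1]:.2f}")
```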

  19. An online-coupled NWP/ACT model with conserved Lagrangian levels

    NASA Astrophysics Data System (ADS)

    Sørensen, B.; Kaas, E.; Lauritzen, P. H.

    2012-04-01

    Numerical weather and climate modelling is under constant development. Semi-implicit semi-Lagrangian (SISL) models have proven numerically efficient in both short-range weather forecasts and climate models, due to their ability to use long time steps. Chemical/aerosol feedback mechanisms are becoming more and more relevant in NWP as well as climate models, since biogenic and anthropogenic emissions can have a direct effect on the dynamics and radiative properties of the atmosphere. To include chemical feedback mechanisms in NWP models, on-line coupling is crucial. In 3D semi-Lagrangian schemes with quasi-Lagrangian vertical coordinates, the Lagrangian levels are remapped to Eulerian model levels at each time step. This remapping introduces an undesirable tendency to smooth sharp gradients and creates unphysical numerical diffusion in the vertical distribution. A semi-Lagrangian advection method is introduced that combines an inherently mass-conserving 2D semi-Lagrangian scheme with a SISL scheme employing both hybrid vertical coordinates and a fully Lagrangian vertical coordinate. This minimizes the vertical diffusion and thus potentially improves the simulation of the vertical profiles of moisture, clouds, and chemical constituents. Since the Lagrangian levels suffer from traditional Lagrangian limitations caused by the convergence and divergence of the flow, remappings to the Eulerian model levels are generally still required - but they need only be applied after a number of time steps - unless dynamic remapping methods are used. For this, several different remapping methods have been implemented. The combined scheme is mass conserving, consistent, and multi-tracer efficient.

  20. Development of annualized diameter and height growth equations for red alder: preliminary results.

    Treesearch

    Aaron Weiskittel; Sean M. Garber; Greg Johnson; Doug Maguire; Robert A. Monserud

    2006-01-01

    Most individual-tree based growth and yield models use a 5- to 10-year time step, which can make projections for a fast-growing species like red alder quite difficult. Further, it is rather cumbersome to simulate the effects of intensive silvicultural treatments such as thinning or pruning on a time step longer than one year given the highly dynamic nature of growth...

  1. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  2. Look and Feel: Haptic Interaction for Biomedicine

    DTIC Science & Technology

    1995-10-01

    algorithm that is evaluated within the topology of the model. During each time step, forces are summed for each mobile atom based on external forces...volumetric properties; (b) conserving computation power by rendering media local to the interaction point; and (c) evaluating the simulation within...alteration of the model topology. Simulation of the DSM state is accomplished by a multi-step algorithm that is evaluated within the topology of the

  3. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.

  4. Fast and slow responses of Southern Ocean sea surface temperature to SAM in coupled climate models

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Marshall, John; Hausmann, Ute; Armour, Kyle C.; Ferreira, David; Holland, Marika M.

    2017-03-01

    We investigate how sea surface temperatures (SSTs) around Antarctica respond to the Southern Annular Mode (SAM) on multiple timescales. To that end we examine the relationship between SAM and SST within unperturbed preindustrial control simulations of coupled general circulation models (GCMs) included in the Climate Modeling Intercomparison Project phase 5 (CMIP5). We develop a technique to extract the response of the Southern Ocean SST (55°S-70°S) to a hypothetical step increase in the SAM index. We demonstrate that in many GCMs, the expected SST step response function is nonmonotonic in time. Following a shift to a positive SAM anomaly, an initial cooling regime can transition into surface warming around Antarctica. However, there are large differences across the CMIP5 ensemble. In some models the step response function never changes sign and cooling persists, while in other GCMs the SST anomaly crosses over from negative to positive values only 3 years after a step increase in the SAM. This intermodel diversity can be related to differences in the models' climatological thermal ocean stratification in the region of seasonal sea ice around Antarctica. Exploiting this relationship, we use observational data for the time-mean meridional and vertical temperature gradients to constrain the real Southern Ocean response to SAM on fast and slow timescales.

  5. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability; in general, this step demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in the realisation of this step, as GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realised several times when comparing different management scenarios, and uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps could therefore be relevant to the choice of the extent of the evaluation: low feasibility could lead to choosing not to evaluate uncertainty, or to limiting the number of scenario comparisons. Several computer models have been developed over time in order to evaluate flood risk, and GIS software is largely used to realise flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which can be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and improvement of the evaluation over time become a difficult task. The automation of this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) to show the entire process of automation of the second and third steps of flood damage evaluations; and (2) to analyse the resulting potential gains in terms of the time and expertise needed for the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses; we quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realised.
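
    A minimal sketch of the automated second and third steps, overlaying a hazard grid with vulnerability data and applying damage functions, might look as follows in Python; the grids, land-use classes, asset values, and depth-damage curve are all hypothetical.

```python
# Sketch: combine a flood-depth grid with land use and depth-damage curves.
import numpy as np

rng = np.random.default_rng(0)
depth = np.clip(rng.normal(0.8, 0.6, (200, 200)), 0.0, None)   # hazard (m)
landuse = rng.integers(0, 3, (200, 200))       # 0 open, 1 residential, 2 industrial
asset_value = np.array([0.0, 1200.0, 3000.0])  # value per m2 for each class

def damage_fraction(depth_m):
    """Illustrative saturating depth-damage curve (full loss at 3 m)."""
    return np.clip(depth_m / 3.0, 0.0, 1.0)

cell_area = 25.0                               # m2 per raster cell
damage = damage_fraction(depth) * asset_value[landuse] * cell_area
print(f"total potential damage: {damage.sum() / 1e6:.1f} million (currency units)")
```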

  6. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE PAGES

    Chen, Bo; Chen, Chen; Wang, Jianhui; ...

    2017-07-07

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESSs) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESSs, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operating conditions. Furthermore, the proposed method is validated through several case studies performed on modified IEEE 13-node and IEEE 123-node test feeders.
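
    The flavour of the optimization can be conveyed by a toy MILP in Python/PuLP: choose which loads to energize at each time step to maximize served demand under a per-step capacity limit and an inter-temporal stay-energized constraint. This is only a sketch; the paper's model additionally covers switching actions, power flow, ESS dynamics, and CLPU behaviour.

```python
# Toy multi-time-step pickup problem (sketch; not the paper's full model).
import pulp

T = list(range(4))                              # time steps
loads = {"L1": 40, "L2": 25, "L3": 35}          # load demands (kW, hypothetical)
gen_cap = [50, 70, 90, 100]                     # available capacity per step

prob = pulp.LpProblem("restoration", pulp.LpMaximize)
x = pulp.LpVariable.dicts("pickup", (list(loads), T), cat="Binary")

# objective: maximize total served demand over the horizon
prob += pulp.lpSum(loads[i] * x[i][t] for i in loads for t in T)
for t in T:                                     # per-step capacity limit
    prob += pulp.lpSum(loads[i] * x[i][t] for i in loads) <= gen_cap[t]
for i in loads:                                 # inter-temporal constraint:
    for t in T[:-1]:                            # once restored, stay restored
        prob += x[i][t + 1] >= x[i][t]

prob.solve()
for t in T:
    print(t, [i for i in loads if x[i][t].value() == 1])
```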

  8. Reconstructing biochemical pathways from time course data.

    PubMed

    Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago

    2007-03-01

    Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general-to-specific and the specific-to-general approaches. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.
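
    The dictionary idea, regressing measured rates onto a library of law-of-mass-action monomials, can be sketched as follows. This simple least-squares variant is illustrative only; the paper's general-to-specific and specific-to-general model construction procedures are more involved.

```python
# Sketch: regress a measured rate onto a library of mass-action monomials.
import numpy as np
from itertools import combinations_with_replacement
from scipy.integrate import odeint

def mass_action_library(C, max_order=2):
    """Columns 1, c_i, c_i*c_j, ... up to max_order (mass-action monomials)."""
    n_t, n_s = C.shape
    cols, names = [np.ones(n_t)], ["1"]
    for order in range(1, max_order + 1):
        for idx in combinations_with_replacement(range(n_s), order):
            cols.append(np.prod(C[:, list(idx)], axis=1))
            names.append("*".join(f"c{i}" for i in idx))
    return np.column_stack(cols), names

# synthetic time course: dc0/dt = -0.8*c0*c1, dc1/dt = -0.5*c1
t = np.linspace(0.0, 5.0, 200)
C = odeint(lambda c, t: [-0.8 * c[0] * c[1], -0.5 * c[1]], [1.0, 0.6], t)
dC0 = np.gradient(C[:, 0], t)                  # measured rate of species 0

Theta, names = mass_action_library(C)
coef, *_ = np.linalg.lstsq(Theta, dC0, rcond=None)
for name, k in zip(names, coef):
    if abs(k) > 0.05:                          # keep only dominant terms
        print(name, round(k, 3))               # expect c0*c1 with k ~ -0.8
```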

  9. Development, validation and operating room-transfer of a six-step laparoscopic training program for the vesicourethral anastomosis.

    PubMed

    Klein, Jan; Teber, Dogu; Frede, Tom; Stock, Christian; Hruza, Marcel; Gözen, Ali; Seemann, Othmar; Schulze, Michael; Rassweiler, Jens

    2013-03-01

    Development and full validation of a laparoscopic training program for stepwise learning of a reproducible application of a standardized laparoscopic anastomosis technique, and its integration into the clinical course. The training of the vesicourethral anastomosis (VUA) was divided into six simple standardized steps. To establish objective criteria, four experienced surgeons performed the stepwise training protocol. The training performance of thirty-eight participants with no previous laparoscopic experience was then investigated. The times needed to manage each training step and the total training time were recorded. The integration into the clinical course was investigated, and the training results and the corresponding steps during laparoscopic radical prostatectomy (LRP) were analyzed. Data analysis of the corresponding operating room (OR) sections of 793 LRPs was performed, and validity criteria were determined on this basis. In the laboratory section, a significant reduction of OR time for every step was seen in all participants: coordination: 62%; longitudinal incision: 52%; inverted U-shape incision: 43%; plexus: 47%; anastomosis catheter model: 38%; VUA: 38%. The laboratory section required a total time of 29 hours (minimum: 16 hours; maximum: 42 hours). All participants had shorter execution times in the laboratory than under real conditions; the best match was found within the VUA model. To perform an anastomosis under real conditions, 25% more time was needed. By using the training protocol, the performance of the VUA is comparable to that of a surgeon with experience of about 50 laparoscopic VUAs. Data analysis proved content, construct, and prognostic validity. The use of stepwise training approaches enables a surgeon to learn and reproduce complex reconstructive surgical tasks, e.g., the VUA, in a safe environment. The validity of the designed system is given at all levels, and it should be used as a standard in clinical surgical training in laparoscopic reconstructive urology.

  10. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul; Einstein, T. L.; Margetis, Dionisios

    2011-03-01

    We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.

  11. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    NASA Astrophysics Data System (ADS)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.

  12. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks.

    PubMed

    Chang, Li-Chiu; Chen, Pin-An; Chang, Fi-John

    2012-08-01

    A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated by two famous benchmark time series and a streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.

  13. Exploring inductive linearization for pharmacokinetic-pharmacodynamic systems of nonlinear ordinary differential equations.

    PubMed

    Hasegawa, Chihiro; Duffull, Stephen B

    2018-02-01

    Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs, these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) which need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems which can then be solved algebraically or numerically. The inductive approximation is applied to three examples, a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples, again E3 with stiff differential equations and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples with comparable solution time to the matched time-stepping solutions for nonlinear ODEs. The time-stepping solutions however did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances when either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system is of potential risk, then the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
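
    A compact sketch of inductive linearization for example E1 (Michaelis-Menten elimination): freeze the nonlinear denominator at the previous iterate, solve the resulting linear time-varying ODE exactly on each grid interval, and repeat until the iterates settle. Parameter values are arbitrary.

```python
# Sketch: inductive linearization of dC/dt = -Vmax*C/(Km + C).
import numpy as np

Vmax, Km, C0 = 1.0, 0.5, 10.0
t = np.linspace(0.0, 20.0, 401)
dt = np.diff(t)

C = np.full_like(t, C0)                   # iterate 0: a constant profile
for k in range(100):
    a = -Vmax / (Km + C)                  # coefficient frozen at last iterate
    C_new = np.empty_like(C)
    C_new[0] = C0
    for i in range(len(dt)):              # exact step for piecewise-constant a
        C_new[i + 1] = C_new[i] * np.exp(a[i] * dt[i])
    if np.max(np.abs(C_new - C)) < 1e-10: # stop when the iteration settles
        break
    C = C_new
print(f"converged after {k + 1} sweeps; C(t=20) = {C[-1]:.4f}")
```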

  14. Understanding Satellite Characterization Knowledge Gained from Radiometric Data

    DTIC Science & Technology

    2011-09-01

    observation model, the time-resolved pose of a satellite can be estimated autonomously through each pass from non-resolved radiometry. The benefits of...and we assume the satellite can achieve both the set attitude and the necessary maneuver to change its orientation from one time-step to the next...Observation Model The UKF observation model uses the Time domain Analysis Simulation for Advanced Tracking (TASAT) software to provide high-fidelity satellite

  15. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open-source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single-photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high-intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
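
    The split-step structure, a deterministic rate-equation update followed by a stochastic collision update, can be caricatured in a few lines of Python. The energy-loss term and the pairwise collision rule below are purely illustrative stand-ins for the paper's extended multi-rate equation and Monte Carlo collision model.

```python
# Caricature of the split-step idea: deterministic update, then MC collisions.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 20000, 1e-3, 200
E = rng.exponential(1.0, N)               # sampled electron energies (arb. units)

for _ in range(steps):
    # step 1 (deterministic): stand-in single-particle energy relaxation
    E *= np.exp(-0.05 * dt)
    # step 2 (stochastic): random pairs redistribute their summed energy,
    # which conserves energy while driving the distribution to equilibrium
    pairs = rng.permutation(N).reshape(-1, 2)
    tot = E[pairs].sum(axis=1)
    frac = rng.random(len(tot))
    E[pairs[:, 0]], E[pairs[:, 1]] = frac * tot, (1.0 - frac) * tot

print(f"mean energy: {E.mean():.4f} (decays only via the deterministic step)")
```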

  16. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first-order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy will have a dominant influence on the overall results.

  17. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.

  18. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.

  19. Field evaluation of a random forest activity classifier for wrist-worn accelerometer data.

    PubMed

    Pavey, Toby G; Gilson, Nicholas D; Gomersall, Sjaan R; Clark, Bronwyn; Trost, Stewart G

    2017-01-01

    Wrist-worn accelerometers are convenient to wear and associated with greater wear-time compliance. Previous work has generally relied on choreographed activity trials to train and test classification models, though evidence of validity in free-living contexts is starting to emerge. The study aims were: (1) to train and test a random forest activity classifier for wrist accelerometer data; and (2) to determine whether models trained on laboratory data perform well under free-living conditions. Twenty-one participants (mean age=27.6±6.2) completed seven lab-based activity trials and a 24h free-living trial (N=16). Participants wore a GENEActiv monitor on the non-dominant wrist. Classification models recognising four activity classes (sedentary, stationary+, walking, and running) were trained using time- and frequency-domain features extracted from 10-s non-overlapping windows. Model performance was evaluated using leave-one-out cross-validation. Models were implemented using the randomForest package within R. Classifier accuracy during the 24h free-living trial was evaluated by calculating agreement with concurrently worn activPAL monitors. Overall classification accuracy for the random forest algorithm was 92.7%. Recognition accuracy for sedentary, stationary+, walking, and running was 80.1%, 95.7%, 91.7%, and 93.7%, respectively, for the laboratory protocol. Agreement with the activPAL data (stepping vs. non-stepping) during the 24h free-living trial was excellent and, on average, exceeded 90%. The ICC for stepping time was 0.92 (95% CI=0.75-0.97); however, sensitivity and positive predictive values were modest. Mean bias was 10.3min/d (95% LOA=-46.0 to 25.4min/d). The random forest classifier for wrist accelerometer data yielded accurate group-level predictions under controlled conditions, but was less accurate at identifying stepping versus non-stepping behaviour in free-living conditions. Future studies should conduct more rigorous field-based evaluations using observation as a criterion measure.
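
    A sketch of the classification pipeline (time- and frequency-domain features from 10-s windows fed to a random forest) is shown below on synthetic data; the sampling rate, feature set, and validation scheme are simplified stand-ins for those used in the study, which was implemented in R.

```python
# Sketch: windowed features + random forest, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 100                                  # assumed sampling rate (Hz)
WIN = 10 * FS                             # 10-s non-overlapping windows

def window_features(sig):
    """A few illustrative time- and frequency-domain features."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / FS)
    return [sig.mean(), sig.std(), np.percentile(sig, 90),
            freqs[spec.argmax()],         # dominant frequency
            (spec ** 2).sum()]            # total spectral power

rng = np.random.default_rng(0)
X, y = [], []
# four synthetic "activity classes" with increasing movement amplitude/tempo
for label, (amp, f) in enumerate([(0.02, 0.1), (0.05, 0.3), (0.3, 1.8), (0.6, 2.8)]):
    for _ in range(40):
        tt = np.arange(WIN) / FS
        sig = 1 + amp * np.sin(2 * np.pi * f * tt) + 0.02 * rng.normal(size=WIN)
        X.append(window_features(sig))
        y.append(label)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```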

  20. A novel grid-based mesoscopic model for evacuation dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Meng; Lee, Eric Wai Ming; Ma, Yi

    2018-05-01

    This study presents a novel grid-based mesoscopic model for evacuation dynamics. In this model, the evacuation space is discretised into larger cells than those used in microscopic models. This approach directly computes the dynamic changes in crowd densities in cells over the course of an evacuation, with the density flow driven by the density-speed correlation. The computation is faster than in traditional cellular automata evacuation models, which determine density by computing the movements of each pedestrian. To demonstrate the feasibility of this model, we apply it to a series of practical scenarios and conduct a parameter sensitivity study of the effect of changes in the time step δ. The simulation results show that within the valid range of δ, changing δ has only a minor impact on the simulation. The model also makes it possible to directly acquire key information such as bottleneck areas from a time-varying dynamic density map, even when a relatively large time step is adopted. We use the commercial software AnyLogic to evaluate the model. The result shows that the mesoscopic model is more efficient than the microscopic model and provides more in-situ details (e.g., pedestrian movement patterns) than the macroscopic models.
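
    A one-dimensional caricature of the mesoscopic idea: march cell densities forward with an upwind flux driven by a density-speed relation. The linear speed function and corridor geometry are illustrative choices, not the model's actual density-speed correlation.

```python
# Sketch: 1-D corridor, upwind density flux with a linear density-speed law.
import numpy as np

n_cells, dx, dt = 60, 0.5, 0.1            # grid cells, cell size (m), step (s)
v_max, rho_max = 1.5, 4.0                 # free speed (m/s), jam density

def speed(rho):
    """Illustrative linear density-speed relation."""
    return v_max * np.clip(1.0 - rho / rho_max, 0.0, 1.0)

rho = np.zeros(n_cells)
rho[:10] = 3.0                            # crowd gathered near the entrance

for step in range(600):                   # march the density field in time
    q = rho * speed(rho)                  # flux toward the exit (rightward)
    inflow = np.concatenate(([0.0], q[:-1]))
    rho = np.clip(rho + dt / dx * (inflow - q), 0.0, rho_max)

print(f"occupancy remaining in corridor: {rho.sum() * dx:.2f}")
```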

  1. Numerical characterisation of one-step and three-step solar air heating collectors used for cocoa bean solar drying.

    PubMed

    Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl

    2017-12-01

    In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisan methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics (CFD) model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operating conditions. The CFD model was first validated through comparison with measurements from a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that under the same solar irradiation, exposure area and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient than the one-step solar air heating collector. This is because the air's exposure to the collector plate surface in this three-step device was twice that in the one-step collector.

  2. Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fenando

    2003-01-01

    In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step-up and step-down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Each point of the signal is then taken together with a trailing stretch of previously collected data. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC is removed from each set of data, the data are passed through a window, and spectra are calculated for each set. To extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
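
    The described procedure (a trailing window slid over the signal, DC removal, windowing, and spectrum computation) can be sketched in Python; NumPy stands in for the Matlab implementation, and the signal parameters are illustrative.

```python
# Sketch: trailing-window spectral features over a noisy exponential step-up.
import numpy as np

fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
tau = 2.0                                 # time constant of the main process
sig = 1.0 - np.exp(-t / tau)              # exponential step-up (charging)
sig += 0.05 * np.random.default_rng(0).normal(size=t.size)

win_len = int(2 * tau * fs)               # trailing window ~ two time constants
w = np.hanning(win_len)
power, peak = [], []
for i in range(win_len, len(sig)):
    seg = sig[i - win_len:i]
    seg = (seg - seg.mean()) * w          # remove DC, then apply the window
    spec = np.abs(np.fft.rfft(seg))
    power.append((spec ** 2).sum())
    peak.append(spec.max())
# plotting power and peak versus time yields distinct shapes for step-up,
# drift, spike, and step-down segments of the signal
```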

  3. Comparison of sum-of-hourly and daily time step standardized ASCE Penman-Monteith reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa

    2017-10-01

    The objective of this study was to quantify differences associated with using 24-h time step reference evapotranspiration (ETo), as compared with the sum of hourly ETo computations with the standardized ASCE Penman-Monteith (ASCE-PM) model, for semi-arid dry conditions at Fanaye and Ndiaye (Senegal) and semi-arid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed good agreement between the sum of hourly ETo and the daily time step ETo at all four locations. The daily time step overestimated daily ETo relative to the sum of hourly ETo by 1.3 to 8% over the whole study periods. However, the magnitude of the ETo values, and the ratio of the ETo values estimated by the two methods, depend on location and month. The sum of hourly ETo tends to be higher during winter at Fanaye and Sapu, while daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, daily time step estimates of ETo were higher throughout the year. The simple linear regression slopes between the sum of 24-h ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08, with high coefficients of determination (R² ≥ 0.87). Application of the hourly ETo estimation method might improve the accuracy of ETo estimates used to meet irrigation requirements under precision agriculture.

  4. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
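
    The two integrators are easy to compare on a single gating variable dx/dt = (x_inf(t) - x)/tau. In the sketch below, both methods freeze x_inf over each step; the time-varying target mimics a voltage-dependent steady state, and errors are measured against a fine-step reference.

```python
# Sketch: Euler vs. exponential Euler for dx/dt = (x_inf(t) - x)/tau.
import numpy as np

tau = 5.0
x_inf = lambda t: 0.5 + 0.4 * np.sin(0.3 * t)   # time-varying target

def integrate(dt, T, method):
    x, t = 0.0, 0.0
    while t < T - 1e-12:
        xi = x_inf(t)                      # both methods freeze x_inf per step
        if method == "euler":
            x += dt * (xi - x) / tau
        else:                              # EE: exact for frozen coefficients
            x = xi + (x - xi) * np.exp(-dt / tau)
        t += dt
    return x

ref = integrate(1e-4, 25.0, "ee")          # fine-step reference solution
for dt in (0.05, 0.5, 2.5, 5.0):
    e_euler = abs(integrate(dt, 25.0, "euler") - ref)
    e_ee = abs(integrate(dt, 25.0, "ee") - ref)
    print(f"dt={dt:5}: Euler err={e_euler:.2e}   EE err={e_ee:.2e}")
```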

  5. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration, expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, wherein velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. PMID:23970806

  6. A comparison of the performance of 1st order and 2nd order turbulence models when solving the RANS equations in reproducing the liquid film length unsteady response to momentum flux ratio in Gas-Centered Swirl-Coaxial Injectors in Rocket Engine Applications

    DTIC Science & Technology

    2012-06-07

    scheme for the VOF requires the use of the explicit solver to advance the solution in time. The drawback of using the explicit solver is that such an approach required much smaller time steps to guarantee that a converged and stable solution is obtained during each fractional time step (Global...). Comparable results were obtained for the solutions with the RSM model.

  7. Campus Energy Model for Control and Performance Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-09-19

    The core of the modeling platform is an extensible block library for the MATLAB/Simulink software suite. The platform enables true co-simulation (interaction at each simulation time step) with NREL's state-of-the-art modeling tools and other energy modeling software.

  8. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    NASA Astrophysics Data System (ADS)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.

  9. Genetic demixing and evolution in linear stepping stone models

    NASA Astrophysics Data System (ADS)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed is how the observed patterns of genetic diversity can be used for statistical inference, with the differences between the well-mixed and one-dimensional models highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial experiments on range expansions of inoculations of Escherichia coli and Saccharomyces cerevisiae.
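
    The segregation into monoallelic domains is easy to reproduce with a minimal neutral stepping-stone (voter-model) simulation in one dimension; this illustrates the general coarsening phenomenon only, not the paper's full treatment of mutation, selection, and migration.

```python
# Sketch: neutral 1-D voter/stepping-stone dynamics produce allele domains.
import numpy as np

rng = np.random.default_rng(0)
L = 256
alleles = rng.integers(0, 2, L)            # two neutral variants, one per deme

for _ in range(200 * L):                   # ~200 sweeps of random updates
    i = rng.integers(L)
    j = (i + rng.choice([-1, 1])) % L      # a random nearest neighbour (periodic)
    alleles[i] = alleles[j]                # deme i is resampled from deme j

walls = int(np.count_nonzero(np.diff(alleles)))
print(f"domain walls remaining: {walls}, mean allele frequency: {alleles.mean():.2f}")
```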

  10. A scale-free network with limiting on vertices

    NASA Astrophysics Data System (ADS)

    Tang, Lian; Wang, Bin

    2010-05-01

    We propose and analyze a random graph model which explains a phenomenon in economic company networks, in which a company may not expand its business at some time due to limits on money and capacity. The random graph process is defined as follows: at any time-step t, (i) with probability α(k), and independently of other time-steps, each vertex vi (i≤t-1) becomes inactive, meaning that it can no longer receive new edges, where k is the degree of vi at time-step t; (ii) a new vertex vt is added along with m edges incident with vt, whose neighbors are chosen in the manner of preferential attachment. We prove that the degree distribution P(k) of this random graph process satisfies P(k)∝C1^k if α(·) is a constant α0, and P(k)∝C2·k^-3 if α(ℓ)↓0 as ℓ↑∞, where C1, C2 are two positive constants. The analytical result is found to be in good agreement with that obtained by numerical simulations. Furthermore, we obtain by simulation the degree distributions of this model with m-varying functions.
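
    A simulation sketch of the growth rule might look like this (with an ad hoc fallback to keep at least m active vertices, which the formal model does not need); α(k) = α0 corresponds to the constant case discussed above.

```python
# Sketch: growth with preferential attachment and vertex deactivation.
import numpy as np

rng = np.random.default_rng(0)
m, T, alpha0 = 2, 3000, 0.01               # alpha(k) = alpha0: constant case

deg = np.zeros(T, dtype=int)
deg[: m + 1] = m                           # seed: an (m+1)-clique
active = list(range(m + 1))

for v in range(m + 1, T):
    # (i) each active vertex becomes inactive with probability alpha0
    active = [u for u in active if rng.random() > alpha0]
    if len(active) < m:                    # ad hoc fallback (see lead-in)
        active = list(np.argsort(deg[:v])[-m:])
    # (ii) the new vertex attaches m edges preferentially among active vertices
    w = deg[active] / deg[active].sum()
    targets = rng.choice(active, size=m, replace=False, p=w)
    deg[targets] += 1
    deg[v] = m
    active.append(v)

ks, counts = np.unique(deg[deg > 0], return_counts=True)
# a log-linear plot of counts vs ks shows the exponential tail of the
# constant-alpha regime; a decaying alpha(k) instead yields P(k) ~ k^-3
```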

  11. The Impact of ARM on Climate Modeling. Chapter 26

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.

    2016-01-01

    Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric global circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades. Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.

  12. Assessing dose-response effects of national essential medicine policy in China: comparison of two methods for handling data with a stepped wedge-like design and hierarchical structure.

    PubMed

    Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun

    2017-02-22

    To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (the dose-response effect) for data from a stepped-wedge design with a hierarchical structure. The implementation of the national essential medicine policy (NEMP) in China followed a stepped-wedge-like design with five time points and a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer research questions on policy effects. Routinely and annually collected national data on China from 2008 to 2012. 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Agreement and differences between the two methods in estimates of the dose-response effect, and variation in this effect, for the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). The estimated dose-response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2-4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of the mean effects of covariates, and of the random effects of the average LG-OPV among all levels in the example dataset, were obtained by both types of model. Significant variances in the dose-response effect among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose-response effects were pinpointed, only by the multilevel RM model. For examining dose-response effects based on data from multiple time points with a hierarchical structure and stepped-wedge-like designs, multilevel RM models are more efficient, convenient and informative than multilevel DID models.

  13. Aerial robot intelligent control method based on back-stepping

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller (CMAC) neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and tune the parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law achieves the desired attitude tracking performance and good robustness in the presence of uncertainties and large errors in the model parameters.
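
    For context, the back-stepping recursion itself can be shown on the simplest second-order chain x1' = x2, x2' = u; the CMAC compensation and particle-swarm tuning of the paper are omitted. The standard Lyapunov argument gives V' = -k1·z1² - k2·z2² for the law below.

```python
# Sketch: classic back-stepping for x1' = x2, x2' = u (Euler-simulated).
import numpy as np

k1, k2 = 2.0, 2.0
dt, T = 1e-3, 6.0
x1, x2 = 1.0, 0.0                          # initial attitude error and rate

for _ in range(int(T / dt)):
    z1 = x1
    alpha = -k1 * z1                       # virtual control stabilizing x1
    z2 = x2 - alpha                        # error w.r.t. the virtual control
    dalpha = -k1 * x2                      # d(alpha)/dt along trajectories
    u = dalpha - z1 - k2 * z2              # back-stepping control law
    x1 += dt * x2
    x2 += dt * u

print(f"final errors: x1 = {x1:.2e}, x2 = {x2:.2e}")
```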

  14. Combining Fast-Walking Training and a Step Activity Monitoring Program to Improve Daily Walking Activity After Stroke: A Preliminary Study.

    PubMed

    Danks, Kelly A; Pohlig, Ryan; Reisman, Darcy S

    2016-09-01

    To determine preliminary efficacy and to identify baseline characteristics predicting who would benefit most from fast walking training plus a step activity monitoring program (FAST+SAM) compared with fast walking training (FAST) alone in persons with chronic stroke. Randomized controlled trial with blinded assessors. Outpatient clinical research laboratory. Individuals (N=37) >6 months poststroke. Subjects were assigned to either FAST, which was walking training at their fastest possible speed on the treadmill (30min) and overground 3 times per week for 12 weeks, or FAST+SAM. The step activity monitoring program consisted of daily step monitoring with an activity monitor, goal setting, and identification of barriers to activity and strategies to overcome those barriers. Daily step activity metrics (steps/day [SPD], time walking per day), walking speed, and 6-minute walk test (6MWT) distance. There was a significant effect of time for both groups, with all outcomes improving from pre- to posttraining (all P values <.05). FAST+SAM was superior to FAST for the 6MWT (P=.018), with a larger increase in the FAST+SAM group. The interventions had differential effectiveness based on baseline step activity. Sequential moderated regression models demonstrated that for subjects with baseline levels of step activity and 6MWT distances below the mean, the FAST+SAM intervention was more effective than FAST (1715±1584 vs 254±933 SPD; P<.05 for the overall model and ΔR² for SPD and 6MWT). The addition of a step activity monitoring program to a fast walking training intervention may be most effective in persons with chronic stroke who have initially low levels of walking endurance and activity. Regardless of baseline performance, the FAST+SAM intervention was more effective for improving walking endurance.

  15. Making a Computer Model of the Most Complex System Ever Built - Continuum

    Science.gov Websites

    Fragments recovered from this web article: the model tracks the Eastern Interconnection as a function of time, amounting to about 1,000 gigabytes of data; as the modeling software steps forward in time, its decisions affect how the grid operates; and simulating the Interconnection at five-minute intervals for one year would have required more than 400 days of computing time.

  16. Evolution of Reference: A New Service Model for Science and Engineering Libraries

    ERIC Educational Resources Information Center

    Bracke, Marianne Stowell; Chinnaswamy, Sainath; Kline, Elizabeth

    2008-01-01

    This article explores the different steps involved in adopting a new service model at the University of Arizona Science-Engineering Library. In a time of shrinking budgets and changing user behavior, the library was forced to rethink its reference services to be cost-effective and provide quality service at the same time. The new model required…

  17. An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.

    2017-01-01

    In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with small deviations from the occlusion achieved with the gold-standard method. PMID:20529735
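
    The rigid repositioning inside such point-matching steps is commonly computed with an SVD-based (Kabsch/Procrustes) solution. The Python sketch below illustrates that generic alignment step on synthetic points; it is not the authors' quadratic-programming formulation with collision constraints.

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Rigid rotation R and translation t minimizing ||R P + t - Q||
        for matched point sets P, Q with points as rows."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # toy check: recover a known rotation and translation
    rng = np.random.default_rng(0)
    P = rng.normal(size=(100, 3))
    th = 0.3
    Rtrue = np.array([[np.cos(th), -np.sin(th), 0],
                      [np.sin(th),  np.cos(th), 0],
                      [0, 0, 1]])
    Q = P @ Rtrue.T + np.array([1.0, -2.0, 0.5])
    R, t = kabsch(P, Q)
    print(np.allclose(R, Rtrue, atol=1e-6))
    ```

    Iterating this closed-form step with re-matching of nearest surface points yields the familiar ICP loop; the paper's contribution is to add collision constraints and solve the linearized problem as a QP instead.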

  18. A lattice Boltzmann model with an amending function for simulating nonlinear partial differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Lin-Jie; Ma, Chang-Feng

    2010-01-01

    This paper proposes a lattice Boltzmann model with an amending function for one-dimensional nonlinear partial differential equations (NPDEs) of the form u_t + α u u_x + β u^n u_x + γ u_{xx} + δ u_{xxx} + ζ u_{xxxx} = 0. This model differs from existing models in that it lets the time step be equivalent to the square of the space step, from which it derives higher accuracy and the nonlinear terms of the NPDEs. With the Chapman-Enskog expansion, the governing evolution equation is recovered correctly from the continuous Boltzmann equation. The numerical results agree well with the analytical solutions.
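
    As a self-contained illustration of the lattice Boltzmann machinery (not the paper's amended model), the following Python sketch runs a standard D1Q3 BGK scheme on the plain diffusion equation u_t = D u_xx and checks the diffusive spreading against theory; the lattice parameters and initial condition are assumed.

    ```python
    import numpy as np

    # D1Q3 lattice Boltzmann for the diffusion equation u_t = D u_xx
    # (periodic domain, lattice units dx = dt = 1).
    nx, tau, steps = 200, 0.8, 500
    D = (tau - 0.5) / 3.0                  # c_s^2 = 1/3 for D1Q3
    w = np.array([1/6, 2/3, 1/6])          # lattice weights
    e = np.array([-1, 0, 1])               # lattice velocities

    x = np.arange(nx)
    rho = np.exp(-0.5 * ((x - nx / 2) / 5.0) ** 2)  # initial Gaussian
    f = w[:, None] * rho[None, :]                   # start at equilibrium

    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = w[:, None] * rho[None, :]
        f += (feq - f) / tau                        # BGK collision
        for i, ei in enumerate(e):                  # streaming
            f[i] = np.roll(f[i], ei)

    mean = (rho * x).sum() / rho.sum()
    var = ((x - mean) ** 2 * rho).sum() / rho.sum()
    print(f"measured variance {var:.1f} vs theory {5.0**2 + 2*D*steps:.1f}")
    ```

    The measured variance should track sigma0^2 + 2*D*t; the paper's amending function extends this basic scheme to the nonlinear and higher-order terms.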

  19. Analysis of Time Filters in Multistep Methods

    NASA Astrophysics Data System (ADS)

    Hurl, Nicholas

    Geophysical flow simulations have evolved sophisticated implicit-explicit time stepping methods (based on fast-slow wave splittings) followed by time filters to control the unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the Crank-Nicolson Leapfrog method with the Robert-Asselin-Williams (RAW) time filter for systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than for CNLF, and the CNLF+RAW restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard time-step condition.
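
    The Robert-Asselin filter itself is a one-line correction applied after each leapfrog step. The Python sketch below, a minimal illustration with an assumed test equation y' = iωy and assumed parameter values, shows the filter suppressing the spurious computational mode of leapfrog while mildly damping the physical one, consistent with the dissipation noted above.

    ```python
    import numpy as np

    # Leapfrog with the Robert-Asselin filter on the oscillation test
    # equation y' = i*omega*y (the exact solution keeps |y| = 1).
    omega, dt, n, nu = 1.0, 0.1, 2000, 0.05   # nu: RA filter parameter

    y_prev = 1.0 + 0.0j                        # filtered value at step n-1
    y_curr = np.exp(1j * omega * dt)           # exact first step
    for _ in range(n):
        y_next = y_prev + 2j * omega * dt * y_curr      # leapfrog step
        # RA filter: nudge the middle value toward its neighbors' mean,
        # damping the 2*dt computational mode of leapfrog
        y_filt = y_curr + nu * (y_prev - 2.0 * y_curr + y_next)
        y_prev, y_curr = y_filt, y_next

    print(f"|y| after {n} steps: {abs(y_curr):.4f} (exact: 1.0)")
    ```

    The printed amplitude falls slightly below 1, which is exactly the numerical dissipation the thesis quantifies; the RAW variant splits the same correction between two time levels to reduce that damping.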

  20. Surgical planning and manual image fusion based on 3D model facilitate laparoscopic partial nephrectomy for intrarenal tumors.

    PubMed

    Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao

    2014-12-01

    Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and the imaging guidance of manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed 3D model images onto 2D laparoscopic images. The image fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks. The surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). The surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated visible-imaging-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.

  1. Mathematical Modelling of Waveguiding Techniques and Electron Transport. Volume 1.

    DTIC Science & Technology

    1984-01-01

    …at the end of each output time step. The difficulty here is that the last working time step is then simply what is required to hit the output time… Tabata (22) curve-fit algorithm. The comparison of the energy deposition profiles for the 1.0 MeV case is given in Table 4. More complete tables are…

  2. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, box-and-whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predictions of the best ARIMA models are compared to the observed data and show reasonably good agreement. A comparison of the mean and variance of the 3-year (2002-2004) observed data with the predictions of the selected best models shows that the ARIMA boron models can be used safely, since their predictions preserve the basic statistics of the observed data. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
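
    The three-step Box-Jenkins loop described above (identification by minimum AIC, estimation, diagnostic checking of the residuals) can be sketched in a few lines of Python with statsmodels; the series below is a synthetic stand-in for the boron data, and the small grid of candidate orders is an assumption for illustration.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Synthetic stand-in for a boron concentration series.
    rng = np.random.default_rng(1)
    y = pd.Series(np.cumsum(rng.normal(0, 0.1, 300)) + 1.5)

    # Identification: pick (p, d, q) by minimum AIC over a small grid.
    best = min(
        ((p, d, q) for p in range(3) for d in range(2) for q in range(3)),
        key=lambda order: ARIMA(y, order=order).fit().aic,
    )
    res = ARIMA(y, order=best).fit()   # estimation

    # Diagnostic checking: residuals should be uncorrelated
    # (a high Ljung-Box p-value means no remaining autocorrelation).
    lb = acorr_ljungbox(res.resid, lags=[10])
    print(best, round(res.aic, 1), float(lb["lb_pvalue"].iloc[0]))
    ```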

  3. HARPA: A versatile three-dimensional Hamiltonian ray-tracing program for acoustic waves in the atmosphere above irregular terrain

    NASA Astrophysics Data System (ADS)

    Jones, R. M.; Riley, J. P.; Georges, T. M.

    1986-08-01

    The modular FORTRAN 77 computer program traces the three-dimensional paths of acoustic rays through continuous model atmospheres by numerically integrating Hamilton's equations (a differential expression of Fermat's principle). The user specifies an atmospheric model by writing closed-form formulas for its three-dimensional wind and temperature (or sound speed) distribution, and by defining the height of the reflecting terrain vs. geographic latitude and longitude. Some general-purpose models are provided, or users can readily design their own. In addition to computing the geometry of each raypath, HARPA can calculate pulse travel time, phase time, Doppler shift (if the medium varies in time), absorption, and geometrical path length. The program prints a step-by-step account of a ray's progress. The 410-page documentation describes the ray-tracing equations and the structure of the program, and provides complete instructions, illustrated by a sample case.
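
    Integrating Hamilton's equations for acoustic rays is straightforward with a general-purpose ODE solver. The sketch below is a hedged 2-D, windless Python illustration with an assumed linear sound-speed profile; HARPA itself is a far more general 3-D FORTRAN 77 code handling winds and irregular terrain.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # 2-D acoustic ray tracing via Hamilton's equations with H = c(x, z)|k|.
    def c(x, z):                    # sound speed rising with height (assumed)
        return 340.0 + 0.1 * z

    def grad_c(x, z):               # gradient of the profile above
        return np.array([0.0, 0.1])

    def rhs(t, s):
        x, z, kx, kz = s
        k = np.array([kx, kz])
        kn = np.linalg.norm(k)
        dxdt = c(x, z) * k / kn     # dX/dt =  dH/dk (group velocity)
        dkdt = -kn * grad_c(x, z)   # dk/dt = -dH/dX (refraction)
        return [*dxdt, *dkdt]

    theta = np.deg2rad(10.0)        # launch elevation angle
    k0 = np.array([np.cos(theta), np.sin(theta)]) / c(0.0, 0.0)
    sol = solve_ivp(rhs, [0.0, 60.0], [0.0, 0.0, *k0], max_step=0.1)
    print(f"range {sol.y[0, -1]/1000:.1f} km, height {sol.y[1, -1]:.0f} m")
    ```

    With sound speed increasing upward, the upward-launched ray bends back toward the ground, the classic refraction behavior such Hamiltonian tracers reproduce.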

  4. Modelling of current loads on aquaculture net cages

    NASA Astrophysics Data System (ADS)

    Kristiansen, Trygve; Faltinsen, Odd M.

    2012-10-01

    In this paper we propose and discuss a screen type of force model for the viscous hydrodynamic load on nets. The screen model assumes that the net is divided into a number of flat net panels, or screens. It may thus be applied to any kind of net geometry. In this paper we focus on circular net cages for fish farms. The net structure itself is modelled by an existing truss model. The net shape is solved for in a time-stepping procedure that involves solving a linear system of equations for the unknown tensions at each time step. We present comparisons to experiments with circular net cages in steady current, and discuss the sensitivity of the numerical results to a set of chosen parameters. Satisfactory agreement between experimental and numerical prediction of drag and lift as function of the solidity ratio of the net and the current velocity is documented.

  5. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that covers four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets; the ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (the sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performs well in classifying unknown samples across all toxic effects in the imbalanced datasets.
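
    A simplified sketch of the two-part idea (iterative re-balancing followed by overlap cleaning) is shown below in Python; it is a generic illustration under assumed labels (0 = majority, 1 = minority), not the authors' exact ITS algorithm.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def iterative_sample(X, y, n_iter=5, rng=None):
        """Each pass oversamples part of the class imbalance, then drops
        majority points whose nearest neighbor is a minority point
        (a Tomek-like overlap cleaning step)."""
        rng = rng or np.random.default_rng(0)
        for _ in range(n_iter):
            deficit = (y == 0).sum() - (y == 1).sum()
            if deficit <= 0:
                break
            idx = rng.choice(np.flatnonzero(y == 1),
                             size=max(1, deficit // n_iter), replace=True)
            X, y = np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])
            _, ind = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
            overlap = (y == 0) & (y[ind[:, 1]] == 1)
            X, y = X[~overlap], y[~overlap]
        return X, y

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(1, 1, (50, 4))])
    y = np.r_[np.zeros(500, int), np.ones(50, int)]
    Xb, yb = iterative_sample(X, y)
    print(np.bincount(yb))   # class counts after re-balancing
    ```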

  6. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
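
    Steps (i)-(iii) above revolve around repeated cross-validation of competing learners. A minimal scikit-learn sketch of that comparison follows; the synthetic dataset stands in for the 27 candidate variables, and the two-model grid is a simplification of the five algorithms used in the study.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in: 27 features, imbalanced outcome (undiagnosed cases rare).
    X, y = make_classification(n_samples=2000, n_features=27,
                               weights=[0.9], random_state=0)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)

    models = {
        "logistic": make_pipeline(StandardScaler(),
                                  LogisticRegression(max_iter=1000)),
        "forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
        print(f"{name:8s} mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
    ```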

  7. Numerical Issues Associated with Compensating and Competing Processes in Climate Models: an Example from ECHAM-HAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai

    2013-06-26

    The purpose of this paper is to draw attention to the need for appropriate numerical techniques to represent process interactions in climate models. In two versions of the ECHAM-HAM model, different time integration methods are used to solve the sulfuric acid (H2SO4) gas evolution equation, which lead to substantially different results in the H2SO4 gas concentration and the aerosol nucleation rate. Using convergence tests and sensitivity simulations performed with various time stepping schemes, it is confirmed that numerical errors in the second model version are significantly smaller than those in version one. The use of sequential operator splitting in combination with a long time step is identified as the main reason for the large systematic biases in the old model. The remaining errors in version two in the nucleation rate, related to the competition between condensation and nucleation, have a clear impact on the simulated concentration of cloud condensation nuclei in the lower troposphere. These errors can be significantly reduced by employing an implicit solver that handles production, condensation and nucleation at the same time. Lessons learned in this work underline the need for more caution when treating multi-time-scale problems involving compensating and competing processes, a common occurrence in current climate models.
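
    The core numerical issue, that sequentially splitting fast production and loss processes over a long step biases the result, can be demonstrated on a scalar toy problem. The Python sketch below (all rates assumed, loosely mimicking H2SO4 production versus condensation loss) compares "produce, then decay" splitting against the exact update of dC/dt = P - kC:

    ```python
    import numpy as np

    # dC/dt = P - k*C has steady state P/k, but sequential splitting
    # ("full production step, then exact decay step") grossly
    # underestimates it when k*dt is large.
    P, k, dt, nsteps = 1.0, 10.0, 1.0, 50
    C_split, C_exact = 0.0, 0.0
    for _ in range(nsteps):
        # sequential operator splitting over one long step
        C_split = (C_split + P * dt) * np.exp(-k * dt)
        # exact update of the coupled equation over the same step
        C_exact = C_exact * np.exp(-k * dt) + (P / k) * (1 - np.exp(-k * dt))
    print(f"split: {C_split:.2e}   exact (P/k = {P/k}): {C_exact:.2e}")
    ```

    With k*dt = 10 the split solution settles near 5e-5 instead of the true 0.1, a systematic bias of the kind the paper attributes to splitting with long time steps; an implicit solver treating both processes together avoids it.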

  8. Real‐time monitoring and control of the load phase of a protein A capture step

    PubMed Central

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura

    2016-01-01

    ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real‐time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin‐life time studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real‐time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase, when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited a RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase, when a product breakthrough of 0.15 mg/mL was achieved. The proposed method has hence potential for the real‐time monitoring and control of capture steps at large scale production. This might enhance the resin capacity utilization, eliminate time‐consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
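
    Calibrating a PLS model on absorption spectra to predict an effluent concentration can be sketched with scikit-learn; the spectra below are synthetic stand-ins (peak shapes, noise levels and concentrations are all assumed), not the study's HCCF breakthrough data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Synthetic UV/Vis-like spectra: concentration times a product band
    # plus an impurity background band and measurement noise.
    rng = np.random.default_rng(0)
    wl = np.linspace(240, 320, 200)                  # wavelengths, nm
    peak = np.exp(-0.5 * ((wl - 280) / 8.0) ** 2)    # mAb-like band
    bg = np.exp(-0.5 * ((wl - 260) / 20.0) ** 2)     # impurity band
    conc = rng.uniform(0.0, 2.0, 150)                # mg/mL
    X = (conc[:, None] * peak
         + rng.uniform(0, 0.3, 150)[:, None] * bg
         + rng.normal(0, 0.01, (150, 200)))

    Xtr, Xte, ytr, yte = train_test_split(X, conc, random_state=0)
    pls = PLSRegression(n_components=3).fit(Xtr, ytr)
    rmse = mean_squared_error(yte, pls.predict(Xte).ravel()) ** 0.5
    print(f"RMSE of prediction: {rmse:.3f} mg/mL")
    ```

    In the study's second part, restricting calibration to low concentrations and subtracting the background band is what pushed the RMSE down to 0.01 mg/mL.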

  9. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    PubMed

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For two-step transesterification biodiesel production from sunflower oil, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors in the two-step reaction stage are determined quantitatively, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product. Taking the transesterification intermediate product into consideration, both the traditional distillation separation process and an improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process has a distinct advantage in energy duty and equipment requirements owing to the replacement of the costly methanol-biodiesel distillation column.

  10. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  11. Investigating calcite growth rates using a quartz crystal microbalance with dissipation (QCM-D)

    NASA Astrophysics Data System (ADS)

    Cao, Bo; Stack, Andrew G.; Steefel, Carl I.; DePaolo, Donald J.; Lammers, Laura N.; Hu, Yandi

    2018-02-01

    Calcite precipitation plays a significant role in processes such as geological carbon sequestration and toxic metal sequestration and, yet, the rates and mechanisms of calcite growth under close-to-equilibrium conditions are far from well understood. In this study, a quartz crystal microbalance with dissipation (QCM-D) was used for the first time to measure macroscopic calcite growth rates. Calcite seed crystals were first nucleated and grown on sensors, then growth rates of calcite seed crystals were measured in real time under close-to-equilibrium conditions (saturation index SI = log({Ca2+}{CO32-}/Ksp) = 0.01-0.7, where {i} denotes the activity of ion i and Ksp = 10^-8.48 is the calcite thermodynamic solubility constant). At the end of the experiments, the total masses of calcite crystals on sensors measured by QCM-D and by inductively coupled plasma mass spectrometry (ICP-MS) were consistent, validating the QCM-D measurements. Calcite growth rates measured by QCM-D were compared with reported macroscopic growth rates measured with auto-titration, ICP-MS, and microbalance. Calcite growth rates measured by QCM-D were also compared with microscopic growth rates measured by atomic force microscopy (AFM) and with rates predicted by two process-based crystal growth models. The discrepancies in growth rates among AFM measurements and model predictions appear to arise mainly from differences in step densities, while the step velocities were consistent among the AFM measurements as well as with both model predictions. Using the predicted steady-state step velocity and the measured step densities, both models predict well the growth rates measured using QCM-D and AFM. This study provides valuable insights into the effects of reactive site densities on calcite growth rate, which may help design future growth models to predict transient-state step densities.

  12. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    NASA Astrophysics Data System (ADS)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damages to combustion hardware. This requires prevention of combustion instabilities, which, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The proposed method permits not only the unsteady frequencies of acoustic oscillations to be computed, but the amplitudes of such oscillations as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how these oscillations are harmful to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.

  13. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE PAGES

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...

    2017-07-14

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  14. Computation of Acoustic Waves Through Sliding-Zone Interfaces Using an Euler/Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    1996-01-01

    The effect of a patched sliding-zone interface on the transmission of acoustic waves is examined for two- and three-dimensional model problems. A simple but general interpolation scheme at the patched boundary passes acoustic waves without distortion, provided that a sufficiently small time step is taken. A guideline is provided for the maximum permissible time step or zone speed that gives an acceptable error introduced by the sliding-zone interface.

  15. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  16. Manual for implementing a Shared Time Engineering Program (STEP) September 1980 through September 1983

    NASA Astrophysics Data System (ADS)

    Aronoff, H. I.; Leslie, J. J.; Mittleman, A. N.; Holt, S.

    1983-11-01

    This manual describes a Shared Time Engineering Program (STEP) conducted by the New England Apparel Manufacturers Association (NEAMA), headquartered in Fall River, Massachusetts, and funded by the Office of Trade Adjustment Assistance of the U.S. Department of Commerce. It is addressed to industry association executives, industrial engineers and others interested in examining an innovative model of industrial engineering assistance to small plants which might be adapted to their particular needs.

  17. Pesticide uptake in potatoes: model and field experiments.

    PubMed

    Juraske, Ronnie; Vivas, Carmen S Mosquera; Velásquez, Alexander Erazo; Santos, Glenda García; Moreno, Mónica B Berdugo; Gomez, Jaime Diaz; Binder, Claudia R; Hellweg, Stefanie; Dallos, Jairo A Guerrero

    2011-01-15

    A dynamic model for the uptake of pesticides in potatoes is presented and evaluated with measurements performed within a field trial in the region of Boyacá, Colombia. The model takes into account the time between pesticide applications and harvest, the time between harvest and consumption, the amount of spray deposition on the soil surface, mobility and degradation of the pesticide in soil, diffusive uptake and persistence due to crop growth and metabolism in plant material, and loss due to food processing. The food processing steps included were cleaning, washing, storing, and cooking. Pesticide concentrations were measured periodically in soil and potato samples from the beginning of tuber formation until harvest. The model was able to predict the magnitude and temporal profile of the experimentally derived pesticide concentrations well, with all measurements falling within the 90% confidence interval. The fraction of chlorpyrifos applied on the field during plant cultivation that is eventually ingested by the consumer is on average 10^-4 to 10^-7, depending on the time between pesticide application and ingestion and the processing steps considered.
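
    A minimal sketch of the ingestion-fraction bookkeeping implied above, assuming first-order losses and purely illustrative (not calibrated) rate constants and processing factors:

    ```python
    import numpy as np

    # Chain of first-order losses from field application to plate.
    # All numbers below are assumed for illustration only.
    k_soil, k_plant = 0.05, 0.03           # 1/day degradation rates
    t_harvest, t_storage = 30, 14          # days
    f_uptake = 0.01                        # fraction moving soil -> tuber
    processing = {"washing": 0.7, "peeling": 0.3, "cooking": 0.5}

    frac = (np.exp(-k_soil * t_harvest)    # decay in soil before harvest
            * f_uptake                     # diffusive uptake into the tuber
            * np.exp(-k_plant * t_storage))  # decay during storage
    for step, retained in processing.items():
        frac *= retained                   # retention factor per step
    print(f"ingested fraction of applied dose: {frac:.1e}")
    ```

    With these toy values the result lands around 10^-4, at the upper end of the range the field study reports.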

  18. Simplified jet-A kinetic mechanism for combustor application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman

    1993-01-01

    Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. For hydrocarbon oxidation, detailed mechanisms are only available for the simplest types of hydrocarbons such as methane, ethane, acetylene, and propane. These detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic (CFD) models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. To simulate these conditions a very sophisticated computer model is required, which requires large computer memory capacity and long run times. Therefore, gas turbine combustion modeling has frequently been simplified by using global reaction mechanisms, which can predict only the quantities of interest: heat release rates, flame temperature, and emissions. Jet fuels are wide-boiling-range hydrocarbons with ranges extending through those of gasoline and kerosene. These fuels are chemically complex, often containing more than 300 components. Jet fuel typically can be characterized as containing 70 vol pct paraffin compounds and 25 vol pct aromatic compounds. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented here. This mechanism is verified by comparing with Jet-A fuel ignition delay time experimental data, and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.

  19. Preclinic group education sessions reduce waiting times and costs at public pain medicine units.

    PubMed

    Davies, Stephanie; Quintner, John; Parsons, Richard; Parkitny, Luke; Knight, Paul; Forrester, Elizabeth; Roberts, Mary; Graham, Carl; Visser, Eric; Antill, Tracy; Packer, Tanya; Schug, Stephan A

    2011-01-01

    To assess the effects of preclinic group education sessions and system redesign on tertiary pain medicine units and patient outcomes. Prospective cohort study. Two public hospital multidisciplinary pain medicine units. People with persistent pain. A system redesign from a "traditional" model (initial individual medical appointments) to a model that delivers group education sessions prior to individual appointments. Based on Patient Triage Questionnaires, patients were scheduled to attend Self-Training Educative Pain Sessions (STEPS), a two-day, eight-hour group education program, followed by optional patient-initiated clinic appointments. Number of patients completing STEPS who subsequently requested individual outpatient clinic appointment(s); wait times; unit cost per new patient referred; recurrent healthcare utilization; patient satisfaction; Global Perceived Impression of Change (GPIC); and utilized pain management strategies. Following STEPS, 48% of attendees requested individual outpatient appointments. Wait times reduced from 105.6 to 16.1 weeks at one pain unit and from 37.3 to 15.2 weeks at the second. Unit cost per new patient appointed reduced from 1,805 Australian dollars (AUD) to AUD$541 (for STEPS). At 3 months, patients scored their satisfaction with "the treatment received for their pain" more positively than at baseline (change score=0.88; P=0.0003), GPIC improved (change score=0.46; P<0.0001), and the mean number of active strategies utilized increased by 4.12 per patient (P=0.0004). The introduction of STEPS was associated with reduced wait times and costs at public pain medicine units and increased both the use of active pain management strategies and patient satisfaction.

  20. Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED)

    PubMed Central

    Fontaine, Reid Griffith; Dodge, Kenneth A.

    2009-01-01

    Considerable scientific and intervention attention has been paid to judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decision making to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths with mathematical representations that may be used to quantify response strength. These components are a heuristic to describe decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions. PMID:20802851

  1. Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED).

    PubMed

    Fontaine, Reid Griffith; Dodge, Kenneth A

    2006-11-01

    Considerable scientific and intervention attention has been paid to judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decision making to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths with mathematical representations that may be used to quantify response strength. These components are a heuristic to describe decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions.

  2. Trends in physical activity, health-related fitness, and gross motor skills in children during a two-year comprehensive school physical activity program.

    PubMed

    Brusseau, Timothy A; Hannon, James C; Fu, You; Fang, Yi; Nam, Kahyun; Goodrum, Sara; Burns, Ryan D

    2018-01-06

    The purpose of this study was to examine the trends in school-day step counts, health-related fitness, and gross motor skills during a two-year Comprehensive School Physical Activity Program (CSPAP) in children. Longitudinal trend analysis. Participants were a sample of children (N=240; mean age=7.9±1.2 years; 125 girls, 115 boys) enrolled in five low-income schools. Outcome variables consisted of school-day step counts, Body Mass Index (BMI), estimated VO2peak, and gross motor skill scores assessed using the Test of Gross Motor Development-3rd Edition (TGMD-3). Measures were collected over a two-year CSPAP including a baseline and several follow-up time points. Multi-level mixed effects models were employed to examine time trends on each continuous outcome variable. Markov-chain transition models were employed to examine time trends for derived binary variables for school-day steps, BMI, and estimated VO2peak. There were statistically significant time coefficients for estimated VO2peak (b=1.10 mL/kg/min, 95% C.I. [0.35-2.53 mL/kg/min], p=0.009) and TGMD-3 scores (b=7.8, 95% C.I. [6.2-9.3], p<0.001). There were no significant changes over time for school-day step counts or BMI. Boys had a greater change in the odds of achieving a step count associated with 30 min of school-day MVPA (OR=1.25, 95% C.I. [1.02-1.48], p=0.044). A two-year CSPAP was related to increases in cardio-respiratory endurance and TGMD-3 scores. School-day steps and BMI were primarily stable across the two-year intervention.

  3. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  4. Pharmacodynamics of Voriconazole in Children: Further Steps along the Path to True Individualized Therapy

    PubMed Central

    Huurneman, Luc J.; Neely, Michael; Veringa, Anette; Docobo Pérez, Fernando; Ramos-Martin, Virginia; Tissing, Wim J.; Alffenaar, Jan-Willem C.

    2016-01-01

    Voriconazole is the agent of choice for the treatment of invasive aspergillosis in children at least 2 years of age. The galactomannan index is a routinely used diagnostic marker for invasive aspergillosis and can be useful for following the clinical response to antifungal treatment. The aim of this study was to develop a pharmacokinetic-pharmacodynamic (PK-PD) mathematical model that links the pharmacokinetics of voriconazole with the galactomannan readout in children. Twelve children receiving voriconazole for treatment of proven, probable, and possible invasive fungal infections were studied. A previously published population PK model was used as the Bayesian prior. The PK-PD model was used to estimate the average area under the concentration-time curve (AUC) in each patient and the resultant galactomannan-time profile. The relationship between the ratio of the AUC to the concentration of voriconazole that induced half maximal killing (AUC/EC50) and the terminal galactomannan level was determined. The voriconazole concentration-time and galactomannan-time profiles were both highly variable. Despite this variability, the fit of the PK-PD model was good, enabling both the pharmacokinetics and pharmacodynamics to be described in individual children. (AUC/EC50)/15.4 predicted terminal galactomannan (P = 0.003), and a ratio of >6 suggested a lower terminal galactomannan level (P = 0.07). The construction of linked PK-PD models is the first step in developing control software that enables not only individualized voriconazole dosages but also individualized concentration targets to achieve suppression of galactomannan levels in a timely and optimally precise manner. Controlling galactomannan levels is a first critical step to maximizing clinical response and survival. PMID:26833158

  5. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Discrete wavelet transform was applied to decompose the ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multivariate inputs. The forecasting accuracy of peak values and at longer lead times improved significantly.

  6. An Analysis of a Puff Dispersion Model for a Coastal Region.

    DTIC Science & Technology

    1982-06-01

    grid is determined by computing their movement for a finite time step using a measured wind field. The growth and buoyancy of the puffs are computed... advection step. The grid concentrations can be allowed to accumulate or simply be updated with the latest instantaneous value. A minimum grid concentration

  7. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
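
    The realization step named above (the Eigensystem Realization Algorithm) is compact enough to sketch: build a Hankel matrix from the Markov parameters, take an SVD, and read off a balanced (A, B, C). The Python sketch below is a generic forward-time SISO illustration of ERA, not the paper's backward-time variant.

    ```python
    import numpy as np

    def era(Y, n):
        """ERA (SISO sketch): realize (A, B, C) of order n from
        pulse-response Markov parameters Y = [CB, CAB, CA^2B, ...]."""
        m = (len(Y) - 1) // 2
        H0 = np.array([[Y[i + j] for j in range(m)] for i in range(m)])
        H1 = np.array([[Y[i + j + 1] for j in range(m)] for i in range(m)])
        U, s, Vt = np.linalg.svd(H0)
        U, s, Vt = U[:, :n], s[:n], Vt[:n]         # rank-n truncation
        S2, Si = np.diag(np.sqrt(s)), np.diag(1 / np.sqrt(s))
        A = Si @ U.T @ H1 @ Vt.T @ Si
        B = (S2 @ Vt)[:, :1]
        C = (U @ S2)[:1, :]
        return A, B, C

    # toy check: recover a known 2nd-order system from its pulse response
    A0 = np.array([[0.9, 0.2], [-0.2, 0.9]])
    B0 = np.array([[1.0], [0.0]])
    C0 = np.array([[1.0, 0.0]])
    Y = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(21)]
    A, B, C = era(Y, 2)
    print(np.allclose(np.sort(np.linalg.eigvals(A)),
                      np.sort(np.linalg.eigvals(A0))))
    ```

    The paper's backward-time twist changes how the Markov parameters are obtained from data; the realization machinery itself is the same.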

  8. Assessing the performance of regional landslide early warning models: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Calvello, M.; Piciullo, L.

    2016-01-01

    A schematic of the components of regional early warning systems for rainfall-induced landslides is herein proposed, based on a clear distinction between warning models and warning systems. According to this framework an early warning system comprises a warning model as well as a monitoring and warning strategy, a communication strategy and an emergency plan. The paper proposes the evaluation of regional landslide warning models by means of an original approach, called the "event, duration matrix, performance" (EDuMaP) method, comprising three successive steps: identification and analysis of the events, i.e., landslide events and warning events derived from available landslides and warnings databases; definition and computation of a duration matrix, whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model performance by means of performance criteria and indicators applied to the duration matrix. During the first step the analyst identifies and classifies the landslide and warning events, according to their spatial and temporal characteristics, by means of a number of model parameters. In the second step, the analyst computes a time-based duration matrix with a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warning data from the municipal early warning system operating in Rio de Janeiro (Brazil).
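
    A toy sketch of the second step, the duration matrix, is given below in Python, assuming illustrative event lists and class definitions; in the real method the classes and parameters are chosen by the analyst, as described above.

    ```python
    import numpy as np

    # Duration matrix D: rows are warning classes, columns are
    # landslide-event classes; entries accumulate the hours during which
    # warnings of class i overlapped landslide events of class j.
    # Event lists below are illustrative only.
    warnings = [   # (start_h, end_h, warning_class 0..2)
        (0, 12, 1), (24, 30, 2), (48, 60, 0),
    ]
    landslides = [ # (start_h, end_h, landslide_class 0..1)
        (6, 10, 1), (25, 26, 0),
    ]

    D = np.zeros((3, 2))
    for w0, w1, wc in warnings:
        for l0, l1, lc in landslides:
            overlap = max(0.0, min(w1, l1) - max(w0, l0))
            D[wc, lc] += overlap
    print(D)   # performance indicators are then computed from D
    ```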

  9. Software for Automated Reading of STEP Files by I-DEAS(trademark)

    NASA Technical Reports Server (NTRS)

    Pinedo, John

    2003-01-01

    A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.

  10. Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models

    PubMed Central

    Miller, Craig R.; Joyce, Paul; Wichman, Holly A.

    2011-01-01

    Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (10^4 and 10^6), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed, including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large-bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small-bottleneck lines were less diverse. The small-bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large-bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps, and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559

  11. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
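
    SDE systems of this kind are typically integrated with an Euler-Maruyama scheme. The Python sketch below is a generic toy, a linearized nearest-neighbor coupling with conservative additive noise, not the paper's full BCF system; all coefficients are assumed.

    ```python
    import numpy as np

    # Euler-Maruyama for coupled terrace-width SDEs with linearized
    # nearest-neighbor coupling and additive white noise (periodic ring).
    N, dt, steps = 64, 1e-3, 20000
    kappa, sigma = 1.0, 0.2          # coupling strength, noise amplitude
    rng = np.random.default_rng(0)

    w = np.ones(N)                   # terrace widths, mean fixed to 1
    for _ in range(steps):
        lap = np.roll(w, 1) - 2 * w + np.roll(w, -1)   # discrete Laplacian
        dW = rng.normal(0.0, np.sqrt(dt), N)
        dW -= dW.mean()              # conserve the total width (crystal size)
        w += kappa * lap * dt + sigma * dW
        # note: positivity of widths is not enforced in this toy

    print(f"terrace-width-distribution variance: {w.var():.4f}")
    ```

    The steady-state variance of such runs is what the paper's mean-field formulas for the terrace-width distribution aim to predict analytically.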

  12. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R^2_indiv or the Kendall's τ at the individual level, and the R^2_trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as the measure of individual-level surrogacy using a copula model. Then, the R^2_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R^2_trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature.

  13. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or the losses associated with a certain probability of occurrence can be estimated for the entire study area.
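
    The k-Nearest-Neighbor disaggregation step can be sketched compactly: for each simulated daily total, borrow and rescale the hourly pattern of one of the k observed days with the closest daily total. The Python sketch below uses synthetic toy data in place of the Vorarlberg records.

    ```python
    import numpy as np

    def knn_disaggregate(daily_sim, daily_obs, hourly_obs, k=3, rng=None):
        """Disaggregate simulated daily totals to hourly values by
        sampling one of the k observed days with the closest daily total
        and rescaling its hourly pattern (minimal sketch of the k-NN step)."""
        rng = rng or np.random.default_rng(0)
        out = []
        for d in daily_sim:
            nearest = np.argsort(np.abs(daily_obs - d))[:k]
            pattern = hourly_obs[rng.choice(nearest)]
            total = pattern.sum()
            out.append(pattern * (d / total) if total > 0 else np.zeros(24))
        return np.array(out)

    # toy data: 100 observed days of hourly rainfall, 10 simulated totals
    rng = np.random.default_rng(1)
    hourly_obs = rng.gamma(0.3, 1.0, (100, 24))
    daily_obs = hourly_obs.sum(axis=1)
    hourly_sim = knn_disaggregate(rng.gamma(2.0, 3.0, 10),
                                  daily_obs, hourly_obs)
    print(hourly_sim.shape, hourly_sim.sum(axis=1)[:3])
    ```

    Rescaling preserves the simulated daily totals exactly while inheriting realistic sub-daily structure from the observations, which is the property the flood-event generation relies on.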

  14. Modelling to very high strains

    NASA Astrophysics Data System (ADS)

    Bons, P. D.; Jessell, M. W.; Griera, A.; Evans, L. A.; Wilson, C. J. L.

    2009-04-01

    Ductile strains in shear zones often reach extreme values, resulting in typical structures, such as winged porphyroclasts and several types of shear bands. The numerical simulation of the development of such structures has so far been inhibited by the low maximum strains that numerical models can normally achieve. Typical numerical models collapse at shear strains in the order of one to three. We have implemented a number of new functionalities in the numerical platform "Elle" (Jessell et al. 2001), which significantly increase the amount of strain that can be achieved and simultaneously reduce boundary effects that become increasingly disturbing at higher strain. Constant remeshing, while maintaining the polygonal phase regions, is the first step to avoid collapse of the finite-element grid required by finite-element solvers, such as Basil (Houseman et al. 2008). The second step is to apply a grain-growth routine to the boundaries of polygons that represent phase regions. This way, the development of sharp angles is avoided. A second advantage is that phase regions may merge or become separated (boudinage). Such topological changes are normally not possible in finite-element deformation codes. The third step is the use of wrapping vertical model boundaries, with which optimal and unchanging model boundaries are maintained for the application of stress or velocity boundary conditions. The fourth step is to shift the model by a random amount in the vertical direction every time step. This way, the fixed horizontal boundary conditions are applied to different material points within the model every time step. Disturbing boundary effects are thus averaged out over the whole model and not localised to, e.g., the top and bottom of the model. Reduction of boundary effects has the additional advantage that models can be smaller and, therefore, numerically more efficient. Owing to the combination of these existing and new functionalities, it is now possible to simulate the development of very high-strain structures. Jessell, M.W., Bons, P.D., Evans, L., Barr, T., Stüwe, K. 2001. Elle: a micro-process approach to the simulation of microstructures. Computers & Geosciences 27, 17-30. Houseman, G., Barr, T., Evans, L. 2008. Basil: stress and deformation in a viscous material. In: P.D. Bons, D. Koehn & M.W. Jessell (Eds.) Microdynamics Simulation. Lecture Notes in Earth Sciences 106, Springer, Berlin, 405p.

  15. Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium

    NASA Astrophysics Data System (ADS)

    González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César

    2018-01-01

    This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are on the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
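
    For readers unfamiliar with the θ-method, a minimal sketch on a 1-D diffusion analogue with a growing, non-uniform time step is given below; θ = 0, 1, and 1/2 give the explicit, fully implicit, and Crank-Nicolson schemes discussed above. The geometric step growth and the grid are illustrative assumptions, not the paper's time-step distribution.

```python
import numpy as np

def theta_step(u, A, dt, theta):
    """One θ-method step for du/dt = A u:
    (I - θ dt A) u^{n+1} = (I + (1-θ) dt A) u^n."""
    I = np.eye(len(u))
    return np.linalg.solve(I - theta * dt * A, (I + (1 - theta) * dt * A) @ u)

# 1-D diffusion operator on a uniform grid, Dirichlet boundaries
n, dx = 50, 1.0 / 51
A = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

u = np.sin(np.pi * np.linspace(dx, 1 - dx, n))   # initial "hydraulic head"
t, dt = 0.0, 1e-6
while t < 1e-3:
    u = theta_step(u, A, dt, theta=0.5)          # Crank-Nicolson
    t += dt
    dt *= 1.05                                   # geometrically growing step
```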

  16. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics-based space weather modeling and even forecasting.

  17. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  18. FABRIC FILTER MODEL FORMAT CHANGE; VOLUME II. USER'S GUIDE

    EPA Science Inventory

    The report describes an improved mathematical model for use by control personnel to determine the adequacy of existing or proposed filter systems designed to minimize coal fly ash emissions. Several time-saving steps have been introduced to facilitate model application by Agency ...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

    Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmosphere Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.
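
    The preconditioning idea can be illustrated schematically: build an inexpensive approximate factorization of the operator and hand it to a Krylov solver. The 2 x 2 block system and the block-Jacobi approximation below are toy assumptions standing in for the linearized shallow-water blocks, not the CAM-SE implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative 2x2 block system [[A, B], [C, D]]; in the shallow-water
# setting the blocks would couple height and velocity unknowns.
n = 100
A = sp.diags([4.0], [0], shape=(n, n)) + sp.random(n, n, 0.01, random_state=0)
D = sp.diags([3.0], [0], shape=(n, n)) + sp.random(n, n, 0.01, random_state=1)
B = 0.1 * sp.random(n, n, 0.01, random_state=2)
C = 0.1 * sp.random(n, n, 0.01, random_state=3)
K = sp.bmat([[A, B], [C, D]]).tocsc()
b = np.ones(2 * n)

# Approximate block factorization: drop the off-diagonal coupling and
# solve the diagonal blocks exactly (a block-Jacobi preconditioner).
Ainv = spla.splu(A.tocsc())
Dinv = spla.splu(D.tocsc())

def apply_M(r):
    return np.concatenate([Ainv.solve(r[:n]), Dinv.solve(r[n:])])

M = spla.LinearOperator((2 * n, 2 * n), matvec=apply_M)
x, info = spla.gmres(K, b, M=M)   # info == 0 on convergence
```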

  20. A 2-DOF model of an elastic rocket structure excited by a follower force

    NASA Astrophysics Data System (ADS)

    Brejão, Leandro F.; da Fonseca Brasil, Reyolando Manoel L. R.

    2017-10-01

    We present a two-degree-of-freedom model of an elastic rocket structure excited by the follower force given by the motor thrust, which is assumed to act always along the tangent to the deformed shape of the device at its lower tip. The model comprises two massless rigid pinned bars, initially in the vertical position, connected by rotational springs. Lumped masses and dampers are considered at the connections. The generalized coordinates are the angular displacements of the bars with respect to the vertical. We derive the equations of motion via Lagrange’s equations and simulate the system’s time evolution using a fourth-order Runge-Kutta step-by-step numerical integration algorithm. Results indicate the possible occurrence of stable and unstable vibrations, such as limit cycles.
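
    A minimal sketch of the classical fourth-order Runge-Kutta stepping named above, applied to a generic two-degree-of-freedom system written in first-order form; the mass, stiffness and damping matrices are placeholders, not the rocket model's actual parameters (which would also carry the follower-force terms).

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """Classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# State y = [theta1, theta2, omega1, omega2]: bar angles and angular rates.
M = np.eye(2)                    # placeholder mass matrix
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])      # placeholder rotational-spring stiffness
C = 0.05 * np.eye(2)             # placeholder damping at the connections

def f(t, y):
    q, qdot = y[:2], y[2:]
    qddot = np.linalg.solve(M, -C @ qdot - K @ q)
    return np.concatenate([qdot, qddot])

y, t, dt = np.array([0.1, 0.0, 0.0, 0.0]), 0.0, 1e-3
for _ in range(10000):
    y = rk4_step(f, t, y, dt)
    t += dt
```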

  1. Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system

    NASA Astrophysics Data System (ADS)

    Freitas, Rodrigo; Frolov, Timofey; Asta, Mark

    2017-10-01

    Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for ⟨110⟩ and ⟨112⟩ steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.

  2. Content analysis in information flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grusho, Alexander A.; Grusho, Nick A.

    The paper deals with the architecture of a content recognition system. To analyze the problem, a stochastic model of content recognition in information flows was built. We proved that under certain conditions it is possible to correctly solve a part of the problem with probability 1 by viewing a finite section of the information flow. This means that a good architecture consists of two steps: the first step correctly determines certain subsets of contents, while the second step may demand much more time for a true decision.

  3. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning, IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
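
    The IMEX idea itself can be shown with a first-order toy scheme: the stiff (acoustic-like) operator is treated implicitly while the slow dynamics stay explicit. The linear split and the operators below are illustrative assumptions, far simpler than the ARK methods evaluated in the abstract.

```python
import numpy as np

def imex_euler(u, S, N, dt, nsteps):
    """First-order IMEX Euler for du/dt = S u + N u:
    treat the stiff operator S implicitly, the non-stiff N explicitly:
    (I - dt S) u^{n+1} = (I + dt N) u^n."""
    I = np.eye(len(u))
    lhs = I - dt * S
    for _ in range(nsteps):
        u = np.linalg.solve(lhs, (I + dt * N) @ u)
    return u

# Stiff "vertical" operator (fast acoustic-like modes) and a slow rotation
S = np.array([[-1000.0, 0.0], [0.0, -1000.0]])
N = np.array([[0.0, 1.0], [-1.0, 0.0]])
u = imex_euler(np.array([1.0, 0.0]), S, N, dt=0.01, nsteps=100)
```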

  4. A time step criterion for the stable numerical simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Juan-Lien Ramirez, Alina; Löhnert, Stefan; Neuweiler, Insa

    2017-04-01

    The process of propagating or widening cracks in rock formations by means of fluid flow, known as hydraulic fracturing, has been gaining attention over the last couple of decades, and there is growing interest in its numerical simulation to make predictions. Due to the complexity of the processes taking place, e.g. solid deformation, fluid flow in an open channel, fluid flow in a porous medium and crack propagation, this is a challenging task. Hydraulic fracturing has been numerically simulated for some years now [1], and new methods have been developed in recent years to take more of its processes into account (increasing accuracy) while modeling in an efficient way (lower computational effort). An example is the use of the Extended Finite Element Method (XFEM), whose application originated within the framework of solid mechanics but which is now seen as an effective method for the simulation of discontinuities with no need for re-meshing [2]. While much focus has been put on the correct coupling of the processes mentioned above, less attention has been paid to the stability of the model. When using a quasi-static approach for the simulation of hydraulic fracturing, choosing an adequate time step is not trivial. This is particularly true if the equations are solved in a staggered way. The difficulty lies in the inconsistency between the static behavior of the solid and the dynamic behavior of the fluid. It has been shown that too small time steps may lead to instabilities early in the simulation [3]. While the solid reaches a stationary state instantly, the fluid is not able to achieve equilibrium with its new surroundings immediately. This is why a time step criterion has been developed to quantify the instability of the model with respect to the time step. The presented results were created with a 2D poroelastic model, using the XFEM for both the solid and the fluid phases. An embedded crack propagates following the energy release rate criterion when the fluid pressure within the crack rises. The fluid flow within the crack and in the porous medium is simulated using the mass balance for water and Darcy's law for flow. The equations for flow and deformation in the rock and those for flow in the fracture are solved in a staggered manner, with the two sets of equations coupled via Lagrange multipliers. We present a time step criterion for the stability of the scheme and illustrate this criterion with test examples of crack propagation. [1] T. Boone and A. Ingraffea. A numerical procedure for simulation of hydraulically-driven fracture propagation in poroelastic media. Int. J. Numer. Anal. Met. 14, 27-47, (1990) [2] T. Mohammadnejad and A. Khoei. An extended finite element method for hydraulic fracture propagation in deformable porous media with the cohesive crack model. Finite Elements in Analysis and Design. 73, 77-95, (2013) [3] E.W. Remij, J.J.C. Remmers, J.M. Huyghe, D.M.J. Smeulders. The enhanced local pressure model for the accurate analysis of fluid pressure driven fracture in porous materials. Comput. Methods Appl. Mech. Engrg. 286, 293-312, (2015)

  5. A splitting scheme based on the space-time CE/SE method for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    2016-08-01

    Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes the charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage on the one-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.

  6. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
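
    A sketch of the cluster-bootstrap step as described: resample whole clusters with replacement, refit, and take the standard deviation of the refitted coefficients. The fitting routine fit_cox is a hypothetical placeholder for any Cox fit returning a coefficient vector.

```python
import numpy as np
import pandas as pd

def cluster_bootstrap_se(df, fit_cox, cluster_col, n_boot=500, rng=None):
    """Bootstrap SEs for Cox coefficients by resampling whole clusters
    with replacement; fit_cox(df) -> 1-D array of coefficients is a
    user-supplied (here hypothetical) fitting routine."""
    rng = np.random.default_rng(rng)
    clusters = df[cluster_col].unique()
    coefs = []
    for _ in range(n_boot):
        picked = rng.choice(clusters, size=len(clusters), replace=True)
        # concatenate the chosen clusters; a cluster may appear several times
        sample = pd.concat(
            [df[df[cluster_col] == c] for c in picked], ignore_index=True
        )
        coefs.append(fit_cox(sample))
    return np.std(np.asarray(coefs), axis=0, ddof=1)
```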

  7. Bacteria transport simulation using APEX model in the Toenepi watershed, New Zealand

    USDA-ARS?s Scientific Manuscript database

    The Agricultural Policy/Environmental eXtender (APEX) model is a distributed, continuous, daily-time step small watershed-scale hydrologic and water quality model. In this study, the newly developed fecal-derived bacteria fate and transport subroutine was applied and evaluated using the APEX model. The e...

  8. Evaluating simulations of daily discharge from large watersheds using autoregression and an index of flashiness

    USDA-ARS?s Scientific Manuscript database

    Watershed models are calibrated to simulate stream discharge as accurately as possible. Modelers will often calculate model validation statistics on aggregate (often monthly) time periods, rather than the daily step at which models typically operate. This is because daily hydrologic data exhibit lar...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.; Harrison, D. E. Jr.

    A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed over the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
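
    One common form of such a variable time step rule is to bound the maximum displacement per step from the current speeds and accelerations; the sketch below uses that rule with an assumed displacement bound, which may differ from the authors' exact criterion.

```python
import numpy as np

def choose_dt(velocities, forces, mass, dx_max=0.005, dt_max=1e-15):
    """Pick a time step so that no atom moves more than dx_max in one
    step, estimated from the current fastest speed and acceleration."""
    v = np.linalg.norm(velocities, axis=1).max()
    a = np.linalg.norm(forces, axis=1).max() / mass
    if a > 0:
        # largest dt with v*dt + 0.5*a*dt^2 <= dx_max (solve the quadratic)
        dt = (-v + np.sqrt(v * v + 2.0 * a * dx_max)) / a
    elif v > 0:
        dt = dx_max / v
    else:
        dt = dt_max
    return min(dt, dt_max)
```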

  10. Comparing an annual and daily time-step model for predicting field-scale P loss

    USDA-ARS?s Scientific Manuscript database

    Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...

  11. Height of a faceted macrostep for sticky steps in a step-faceting zone

    NASA Astrophysics Data System (ADS)

    Akutsu, Noriko

    2018-02-01

    The driving force dependence of the surface velocity and the average height of faceted merged steps, the terrace-surface slope, and the elementary step velocity are studied using the Monte Carlo method in the nonequilibrium steady state. The Monte Carlo study is based on a lattice model, the restricted solid-on-solid model with point-contact-type step-step attraction (p-RSOS model). The main focus of this paper is a change of the "kink density" on the vicinal surface. The temperature is selected to be in the step-faceting zone [N. Akutsu, AIP Adv. 6, 035301 (2016), 10.1063/1.4943400], where the vicinal surface is surrounded by the (001) terrace and the (111) faceted step at equilibrium. Long-time simulations are performed at this temperature to obtain steady states for the different driving forces that influence the growth/recession of the surface. A Wulff figure of the p-RSOS model is produced through the anomalous surface tension calculated using the density-matrix renormalization group method. The characteristics of the faceted macrostep profile at equilibrium are classified with respect to the connectivity of the surface tension. This surface tension connectivity also leads to a faceting diagram, where the separated areas are, respectively, classified as a Gruber-Mullins-Pokrovsky-Talapov zone, step droplet zone, and step-faceting zone. Although the p-RSOS model is a simplified model, it shows a wide variety of dynamics in the step-faceting zone. There are four characteristic driving forces: Δμ_y, Δμ_f, Δμ_co, and Δμ_R. When the absolute value of the driving force |Δμ| is smaller than max[Δμ_y, Δμ_f], step attachment-detachment is inhibited, and the vicinal surface consists of (001) terraces and the (111) side surfaces of the faceted macrosteps. For max[Δμ_y, Δμ_f] < |Δμ| < Δμ_co, the surface grows/recedes intermittently through two-dimensional (2D) heterogeneous nucleation at the facet edge of the macrostep. For Δμ_co < |Δμ| < Δμ_R, the surface grows/recedes with the successive attachment-detachment of steps to/from a macrostep. When |Δμ| exceeds Δμ_R, the macrostep vanishes and the surface roughens kinetically. Classical 2D heterogeneous multinucleation was determined to be valid with slight modifications based on the Monte Carlo results for the step velocity and the change in the surface slope of the "terrace." The finite-size effects were also determined to be distinctive near equilibrium.

  12. One-Dimensional Fast Transient Simulator for Modeling Cadmium Sulfide/Cadmium Telluride Solar Cells

    NASA Astrophysics Data System (ADS)

    Guo, Da

    Solar energy, including solar heating, solar architecture, solar thermal electricity and solar photovoltaics, is one of the primary alternative energy sources to fossil fuels. Significant research has been conducted in solar cell efficiency improvement, and simulation of various structures and materials of solar cells provides a deeper understanding of device operation and ways to improve efficiency. Over the last two decades, polycrystalline thin-film Cadmium-Sulfide/Cadmium-Telluride (CdS/CdTe) solar cells fabricated on glass substrates have been considered one of the most promising candidates in photovoltaic technology, for their comparable efficiency and low cost relative to traditional silicon-based solar cells. In this work, a fast one-dimensional time-dependent/steady-state drift-diffusion simulator for modeling solar cells, accelerated by an adaptive non-uniform mesh and automatic time-step control, has been developed and used to simulate a CdS/CdTe solar cell. These models are used to reproduce transients of carrier transport in response to step-function signals of different bias and varied light intensity. The time-step control models are also used to help convergence in steady-state simulations, where constrained material constants, such as carrier lifetimes on the order of nanoseconds and carrier mobilities on the order of 100 cm²/(V·s), must be applied.
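
    Automatic time-step control of the kind mentioned can be sketched as a simple accept/reject loop around the nonlinear solve: shrink the step on failure, grow it when convergence is fast. The thresholds and the solve_step interface are illustrative assumptions.

```python
def adaptive_transient(solve_step, t_end, dt0, dt_min=1e-15, dt_max=1e-6):
    """March a transient solve with automatic step control.
    solve_step(t, dt) -> (converged: bool, n_iter: int) is assumed to
    advance the drift-diffusion system by one step of size dt."""
    t, dt = 0.0, dt0
    while t < t_end:
        converged, n_iter = solve_step(t, dt)
        if not converged:
            dt = max(dt / 2, dt_min)      # reject: retry with a smaller step
            continue
        t += dt
        if n_iter <= 3:                   # fast convergence: grow the step
            dt = min(1.5 * dt, dt_max)
        elif n_iter >= 8:                 # slow convergence: shrink it
            dt = max(0.5 * dt, dt_min)
    return t
```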

  13. Comparative-effectiveness research to aid population decision making by relating clinical outcomes and quality-adjusted life years.

    PubMed

    Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M

    2013-04-01

    Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.

  14. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2013-07-01

    The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families, and their modelling experience differed largely. The prediction exercise was organized in three steps: (1) for the 1st prediction, modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available for a priori predictions in ungauged catchments); they did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd, improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data, charging them pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modellers' decisions in accounting for the various processes, based on what they learned during the field visit (step 2), and add the final outcome of step 3, when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd steps, the progress in prediction quality could be evaluated in relation to individual modelling experience and the costs of added information. We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as added data for improving the parameters needed to satisfy model requirements.

  15. Dimensional accuracy of resultant casts made by a monophase, one-step and two-step, and a novel two-step putty/light-body impression technique: an in vitro study.

    PubMed

    Caputi, Sergio; Varvara, Giuseppe

    2008-04-01

    Dimensional accuracy when making impressions is crucial to the quality of fixed prosthodontic treatment, and the impression technique is a critical factor affecting this accuracy. The purpose of this in vitro study was to compare the dimensional accuracy of a monophase, a 1-step and a 2-step putty/light-body, and a novel 2-step injection impression technique. A stainless steel model with 2 abutment preparations was fabricated, and impressions were made 15 times with each technique. All impressions were made with an addition-reaction silicone impression material (Aquasil) and a stock perforated metal tray. The monophase impressions were made with regular-body material. The 1-step putty/light-body impressions were made with simultaneous use of putty and light-body materials. The 2-step putty/light-body impressions were made with 2-mm-thick resin prefabricated copings. The 2-step injection impressions were made with simultaneous use of putty and light-body materials; in this injection technique, after removing the preliminary impression, a hole was made through the polymerized material at each abutment edge, to coincide with holes present in the stock trays. Extra-light-body material was then added to the preliminary impression and further injected through the hole after reinsertion of the preliminary impression on the stainless steel model. The accuracy of the 4 different impression techniques was assessed by measuring 3 dimensions (intra- and interabutment) (5-µm accuracy) on stone casts poured from the impressions of the stainless steel model. The data were analyzed by 1-way ANOVA and the Student-Newman-Keuls test (alpha=.05). The stone dies obtained with all the techniques had significantly larger dimensions than those of the stainless steel model (P<.01). The order from highest to lowest deviation from the stainless steel model was: monophase, 1-step putty/light body, 2-step putty/light body, and 2-step injection. Significant differences among all of the groups were noted for both the absolute dimensions of the stone dies and their percent deviations from the stainless steel model (P<.01). The 2-step putty/light-body and 2-step injection techniques were the most dimensionally accurate impression methods in terms of resultant casts.

  16. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its one-order-of-magnitude smaller time-step than classical MD, all of which pose significant computational challenges in reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing-power and memory demands imposed on computer hardware by ReaxFF MD. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax ran as much as 12 times faster than van Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than Lammps' C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Latent trajectory studies: the basics, how to interpret the results, and what to report.

    PubMed

    van de Schoot, Rens

    2015-01-01

    In statistics, tools have been developed to estimate individual change over time. The existence of latent trajectories, where individuals are captured by trajectories that are unobserved (latent), can also be evaluated (Muthén & Muthén, 2000). The methods used to evaluate such trajectories are called Latent Growth Mixture Modeling (LGMM) and Latent Class Growth Analysis (LCGA). The difference between the two models is whether variance within latent classes is allowed for (Jung & Wickrama, 2008). The default approach most often used when estimating such models begins with estimating a single-cluster model, where only a single underlying group is presumed. Next, several additional models are estimated with an increasing number of clusters (latent groups or classes). For each of these models, the software is allowed to estimate all parameters without any restrictions. A final model is chosen based on model comparison tools, for example the BIC, the bootstrapped chi-square test, or the Lo-Mendell-Rubin test. To ease the step-by-step use of LGMM/LCGA, guidelines are presented in this symposium (Van de Schoot, 2015) that researchers can apply to longitudinal data, for example on the development of posttraumatic stress disorder (PTSD) after trauma (Depaoli, van de Schoot, van Loey, & Sijbrandij, 2015; Galatzer-Levy, 2015). The guidelines include how to use the software Mplus (Muthén & Muthén, 1998-2012) to run the set of models needed to answer the research question: how many latent classes exist in the data? The next step described in the guidelines is how to add covariates/predictors of class membership using the three-step approach (Vermunt, 2010). Lastly, the guidelines describe what essentials to report in the paper. When applying LGMM/LCGA models for the first time, the guidelines presented can be used to decide what models to run and what to report.
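
    The class-enumeration loop in the guidelines can be illustrated outside Mplus. The sketch below uses scikit-learn Gaussian mixtures as a simplified stand-in for LGMM/LCGA, fitting models with an increasing number of latent classes and keeping the one with the lowest BIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_n_classes(X, max_classes=6, random_state=0):
    """Fit mixture models with 1..max_classes latent classes and return
    the model with the lowest BIC, mimicking the enumeration step."""
    best, best_bic = None, np.inf
    for k in range(1, max_classes + 1):
        gm = GaussianMixture(n_components=k, n_init=5,
                             random_state=random_state).fit(X)
        bic = gm.bic(X)
        if bic < best_bic:
            best, best_bic = gm, bic
    return best, best_bic
```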

  18. Development of a time-dependent hurricane evacuation model for the New Orleans area : research project capsule.

    DOT National Transportation Integrated Search

    2008-08-01

    Current hurricane evacuation transportation modeling uses an approach fashioned after the : traditional four-step procedure applied in urban transportation planning. One of the limiting : features of this approach is that it models traffic in a stati...

  19. Putting mechanisms into crop production models

    USDA-ARS?s Scientific Manuscript database

    Crop simulation models dynamically predict processes of carbon, nitrogen, and water balance on daily or hourly time-steps to the point of predicting yield and production at crop maturity. A brief history of these models is reviewed, and their level of mechanism for assimilation and respiration, ran...

  20. FABRIC FILTER MODEL FORMAT CHANGE; VOLUME 1. DETAILED TECHNICAL REPORT

    EPA Science Inventory

    The report describes an improved mathematical model for use by control personnel to determine the adequacy of existing or proposed filter systems designed to minimize coal fly ash emissions. Several time-saving steps have been introduced to facilitate model application by Agency ...

  1. New Parents’ Psychological Adjustment and Trajectories of Early Parental Involvement

    PubMed Central

    Jia, Rongfang; Kotila, Letitia E.; Schoppe-Sullivan, Sarah J.; Kamp Dush, Claire M.

    2016-01-01

    Trajectories of parental involvement time (engagement and child care) across 3, 6, and 9 months postpartum and associations with parents’ own and their partners’ psychological adjustment (dysphoria, anxiety, and empathic personal distress) were examined using a sample of dual-earner couples experiencing first-time parenthood (N = 182 couples). Using time diary measures that captured intensive parenting moments, hierarchical linear modeling analyses revealed that patterns of associations between psychological adjustment and parental involvement time depended on the parenting domain, aspect of psychological adjustment, and parent gender. Psychological adjustment difficulties tended to bias the 2-parent system toward a gendered pattern of “mother step in” and “father step out,” as father involvement tended to decrease, and mother involvement either remained unchanged or increased, in response to their own and their partners’ psychological adjustment difficulties. In contrast, few significant effects were found in models using parental involvement to predict psychological adjustment. PMID:27397935

  2. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations: WRF DY-CORE MOISTURE TREATMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Endo, Satoshi; Wong, May

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θ_m = θ(1 + 1.61 q_v), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
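
    The substitution itself is a one-line change wherever θ enters the pressure calculation; a trivial sketch of the moist potential temperature used in the fix:

```python
def moist_potential_temperature(theta, qv):
    """theta_m = theta * (1 + 1.61 * qv): potential temperature carrying
    the water-vapor contribution so pressure sees moisture consistently."""
    return theta * (1.0 + 1.61 * qv)
```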

  3. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements to a computer algorithm developed for the time-accurate flow analysis of rotating machines are presented. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to partition the field in an orderly manner into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  4. Patients with Chronic Obstructive Pulmonary Disease Walk with Altered Step Time and Step Width Variability as Compared with Healthy Control Subjects.

    PubMed

    Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas

    2017-06-01

    Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV1/FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in regularity of gait patterns were found between groups. Patients with COPD walk with increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.

  5. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

    The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since every river basin has its own natural characteristics and every hydrological event therein is unique, calibration is a complex process that is not researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough: input and output variables and mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller little control over the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than leaving the procedure entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas; this step requires geological, meteorological, hydraulic and hydrological knowledge from the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; here we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that each observation group contributes equally to the total objective function; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, J, 2013); in doing so, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run simultaneously on multiple computers communicating over TCP, which speeds up the calibration process. A case study with calibration and validation results for the model will be presented.
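
    Steps four and six both stabilize the inversion, and their essence can be shown on a toy linear problem: truncated SVD discards directions that are numerically invisible, while Tikhonov regularization pulls parameters toward their preferred (initial) values. The sketch below is illustrative only and does not reproduce PEST's internals.

```python
import numpy as np

def truncated_svd_solve(J, r, eps=1e-8):
    """Solve J dp = r, discarding singular values below eps * s_max,
    which keeps the inversion numerically stable when J is ill-posed."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > eps * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])

def tikhonov_solve(J, r, p_pref, p0, lam=1.0):
    """Tikhonov regularization: penalize departure of the update from the
    preferred (initial) parameter values, in the spirit of ADDREG1's
    prior information equations."""
    n = J.shape[1]
    lhs = J.T @ J + lam * np.eye(n)
    rhs = J.T @ r + lam * (p_pref - p0)
    return np.linalg.solve(lhs, rhs)
```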

  7. Infiltration/cure modeling of resin transfer molded composite materials using advanced fiber architectures

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Weideman, Mark H.; Long, Edward R., Jr.; Kranbuehl, David E.; Kinsley, Philip J.; Hart, Sean M.

    1991-01-01

    A model was developed which can be used to simulate infiltration and cure of textile composites by resin transfer molding. Fabric preforms were resin infiltrated and cured using model-generated, optimized one-step infiltration/cure protocols. Frequency-dependent electromagnetic sensing (FDEMS) was used to monitor in situ resin infiltration and cure during processing. FDEMS measurements of infiltration time, resin viscosity, and resin degree of cure agreed well with values predicted by the simulation model. Textile composites fabricated using the one-step infiltration/cure procedure were uniformly resin impregnated and void free. Fiber volume fraction measurements by the resin digestion method compared well with values predicted using the model.

  8. Future aircraft networks and schedules

    NASA Astrophysics Data System (ADS)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results for these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.

  9. Hierarchical modeling of heat transfer in silicon-based electronic devices

    NASA Astrophysics Data System (ADS)

    Goicochea Pineda, Javier V.

    In this work a methodology for the hierarchical modeling of heat transfer in silicon-based electronic devices is presented. The methodology includes three steps to integrate the different scales involved in the thermal analysis of these devices. The steps correspond to: (i) the estimation of input parameters and thermal properties required to solve the Boltzmann transport equation (BTE) for phonons by means of molecular dynamics (MD) simulations, (ii) the quantum correction of some of the properties estimated with MD to make them suitable for the BTE, and (iii) the numerical solution of the BTE using the lattice Boltzmann method (LBM) under the single-mode relaxation time approximation, subject to different initial and boundary conditions and including non-linear dispersion relations and different polarizations in the [100] direction. Each step of the methodology is validated with numerical, analytical or experimental reported data. In the first step of the methodology, properties such as phonon relaxation times, dispersion relations, group and phase velocities and specific heat are obtained with MD at 300 and 1000 K (i.e., molecular temperatures). The estimation of the properties considers the anharmonic nature of the potential energy function, including the thermal expansion of the crystal. Both effects are found to modify the dispersion relations with temperature. The behavior of the phonon relaxation times for each mode (i.e., longitudinal and transverse, acoustic and optical phonons) is identified using power functions. The exponents of the acoustic modes agree with those predicted theoretically by perturbation theory at high temperatures, while those for the optical modes are higher. All properties estimated with MD are validated against values for the thermal conductivity obtained from the Green-Kubo method. It is found that the relative contribution of acoustic modes to the overall thermal conductivity is approximately 90% at both temperatures. In the second step, two new quantum correction alternatives are applied to correct the results obtained with MD. The alternatives consider the quantization of the energy per phonon mode. In addition, the effect of isotope scattering is included in the phonon-phonon relaxation time values previously determined in the first step. It is found that both the quantization of the energy and the inclusion of scattering with isotopes significantly reduce the contribution of high-frequency modes to the overall thermal conductivity. After these two effects are considered, the contribution of optical modes reduces to less than 2.4%. In this step, two sets of properties are obtained: the first results from the application of quantum corrections to the abovementioned properties, while the second also includes the isotope scattering. These sets of properties are identified in this work as isotope-enriched silicon (isoSi) and natural silicon (natSi) and are used along with other phonon relaxation time models in the last step of our methodology. Before we solve the BTE using the LBM, a new dispersive lattice Boltzmann formulation is proposed. The new dispersive formulation is based on constant lattice spacings (CLS) and flux limiters, rather than constant time steps (as previously reported). It is found that the new formulation significantly reduces the computational cost and complexity of the solution of the BTE without affecting the thermal predictions. Lastly, in the last step of our methodology, we solve the BTE.
The equation is solved under the relaxation time approximation using our thermal properties estimated for isoSi and natSi and using two phonon formulations: a gray model and the new dispersive method. For comparison purposes, the BTE is also solved using the phenomenological and theoretical phonon relaxation time models of Holland, and of Han and Klemens. Different thermal predictions in steady and transient states are performed to illustrate the application of the methodology to one- and two-dimensional silicon films and to silicon-on-insulator (SOI) transistors. These include the determination of bulk and film thermal conductivities (i.e., out-of-plane and in-plane) and the transient evolution of the wall heat flux and temperature for films of different thicknesses. In addition, the physics of phonons is further analyzed in terms of the influence and behavior of acoustic and optical modes in the thermal predictions and the effect of phonon confinement on the thermal response of SOI-like transistors subject to different self-heating conditions.
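
    As a toy illustration of the LBM applied to a phonon BTE, the sketch below implements a 1-D, gray, two-direction (D1Q2-like) BGK model with hot and cold walls; the discretization, boundary treatment and Knudsen-number scaling are assumptions for the example, far simpler than the dispersive formulation described above.

```python
import numpy as np

def gray_lbm_bte(n_cells=200, n_steps=2000, knudsen=0.1):
    """Toy 1-D gray lattice Boltzmann BTE: two counter-propagating phonon
    energy populations relax toward their local equilibrium (the mean)
    at a rate set by the Knudsen number; hot wall left, cold wall right."""
    tau = knudsen * n_cells          # relaxation time in lattice units
    f_right = np.full(n_cells, 0.5)  # energy moving in +x
    f_left = np.full(n_cells, 0.5)   # energy moving in -x
    for _ in range(n_steps):
        # streaming: populations advect one cell per step
        f_right[1:] = f_right[:-1]
        f_left[:-1] = f_left[1:]
        f_right[0] = 1.0             # hot boundary emits
        f_left[-1] = 0.0             # cold boundary
        # BGK collision: relax toward the isotropic equilibrium
        e_eq = 0.5 * (f_right + f_left)
        f_right += (e_eq - f_right) / tau
        f_left += (e_eq - f_left) / tau
    return f_right + f_left          # local energy density (~ temperature)
```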

  10. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
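
    The adaptive step-size error control mentioned above typically follows the standard embedded-estimate controller dt_new = dt * (tol/err)^(1/(order+1)); a generic sketch (the safety factor and growth limits are conventional assumptions):

```python
def adapt_step(dt, err, tol, order, safety=0.9, grow=5.0, shrink=0.2):
    """Standard step-size controller for an embedded integrator of given
    order: accept or reject based on err vs tol, then rescale dt."""
    if err == 0.0:
        return True, dt * grow
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    factor = min(grow, max(shrink, factor))
    accepted = err <= tol
    return accepted, dt * factor
```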

  11. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency, and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
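
    For reference, the Lorenz 1984 system itself is small; below is a sketch integrating it with a conventional adaptive time-stepping method (SciPy's RK45), the kind of finite difference baseline the GWRM is compared against. The parameter values a = 0.25, b = 4, F = 8, G = 1 are the customary choices, assumed here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz84(t, u, a=0.25, b=4.0, F=8.0, G=1.0):
    """Lorenz (1984) low-order atmospheric circulation model."""
    x, y, z = u
    return [-y * y - z * z - a * x + a * F,
            x * y - b * x * z - y + G,
            b * x * y + x * z - z]

sol = solve_ivp(lorenz84, (0.0, 100.0), [1.0, 0.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)
```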

  12. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  13. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity.

    PubMed

    Chen, Yumiao; Yang, Zhongliang

    2017-01-01

Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinates of drawing traces as time series while simultaneously recording 7-channel sEMG signals. The root mean square (RMS), a widely used time-domain feature, was extracted over the analysis window. Preliminary reconstruction models were then established by GEP, and the original drawing traces were approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. We conclude that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
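
    The windowed RMS feature named above is straightforward to compute; in the following sketch the sampling rate, window length, and hop size are assumed values, not those of the study.

      import numpy as np

      def emg_rms_features(emg, win, step):
          """Sliding-window RMS per channel.

          emg  : array of shape (n_samples, n_channels)
          win  : analysis-window length in samples (assumed value)
          step : hop between successive windows in samples
          Returns an array of shape (n_windows, n_channels)."""
          emg = np.asarray(emg, float)
          starts = range(0, emg.shape[0] - win + 1, step)
          return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0))
                           for s in starts])

      # e.g., 7 channels at 1 kHz, 200 ms windows, 50 ms hop (assumed rates)
      sig = np.random.randn(5000, 7)
      feats = emg_rms_features(sig, win=200, step=50)  # -> shape (97, 7)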

  14. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    NASA Astrophysics Data System (ADS)

    Hoepfer, Matthias

Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the exploration of technologies that do not yet exist, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (roughly a doubling in computational power every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking different dynamic sub-models into a model that represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
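
    A minimal co-simulation master loop with macro-step control can illustrate the time-stepping issue discussed above; the Jacobi-type exchange, the toy first-order sub-models, and the halved-step error estimate are assumptions, not the algorithm proposed in this work.

      import numpy as np

      # Two hypothetical sub-models coupled through scalar signals u1, u2.
      # Each exposes step(t, dt, u_in) -> u_out; internals are black boxes.
      class SubModel:
          def __init__(self, pole, x0):
              self.pole, self.x = pole, x0
          def step(self, t, dt, u_in):
              # simple first-order internal dynamics (stand-in for a solver)
              self.x += dt * (self.pole * self.x + u_in)
              return self.x

      def cosim(m1, m2, t_end, dt=0.1, tol=1e-4):
          """Jacobi-type co-simulation master with naive macro-step control:
          compare one full step against two half steps and adapt dt."""
          t, u1, u2 = 0.0, 0.0, 0.0
          while t < t_end:
              dt = min(dt, t_end - t)
              # trial: one full macro step on copies, so state can roll back
              a, b = SubModel(m1.pole, m1.x), SubModel(m2.pole, m2.x)
              y1_full = a.step(t, dt, u2); y2_full = b.step(t, dt, u1)
              # reference: two half steps with one extra signal exchange
              c, d = SubModel(m1.pole, m1.x), SubModel(m2.pole, m2.x)
              y1h = c.step(t, dt/2, u2); y2h = d.step(t, dt/2, u1)
              y1_half = c.step(t + dt/2, dt/2, y2h)
              y2_half = d.step(t + dt/2, dt/2, y1h)
              err = max(abs(y1_full - y1_half), abs(y2_full - y2_half))
              if err <= tol:       # accept: commit the (better) half-step result
                  m1.x, m2.x = c.x, d.x
                  u1, u2, t = y1_half, y2_half, t + dt
              dt *= min(2.0, max(0.25, 0.8 * (tol / max(err, 1e-15)) ** 0.5))
          return m1.x, m2.x

      print(cosim(SubModel(-1.0, 1.0), SubModel(-2.0, 0.5), t_end=5.0))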

  15. A New Microscopic Model of the Rate- and State- Friction Evolution

    NASA Astrophysics Data System (ADS)

    Li, T.; Rubin, A. M.

    2016-12-01

The Slip (Ruina) law and the Aging (Dieterich) law are the two most common descriptions of the evolution of "state" in rate- and state-dependent friction, behind which are the ideas of slip-dependent and time-dependent fault healing, respectively. Since the mid-1990s, friction experiments have been interpreted as demonstrating that fault healing in rock is primarily time-dependent, and that frictional strength is proportional to contact area (Dieterich and Kilgore, 1994; Beeler et al., 1994). However, a recent re-examination of the data of Beeler et al. (1994) suggests that the evidence for time-dependent healing is equivocal, while large step velocity decreases provide unequivocal evidence of slip-dependent healing (Bhattacharya et al., AGU 2016). Nonetheless, unlike the Aging law, for which see-through experiments showing growing contacts could serve as a physical model, there has been no corresponding physical picture for the Slip law. In this study, we develop a new microscopic model of friction in which each asperity has a heterogeneous strength, with individual portions "remembering" the velocity at which they came into existence. Such a scenario could arise via processes that are more efficient at the margin of a contact than within the interior (e.g., chemical diffusion). A numerical kernel for friction evolution is developed for arbitrary slip histories and an exponential distribution of asperity sizes. For velocity steps we derive an analytical expression that is essentially the Slip law. Numerical inversions show that this model performs as well as the Slip law when fitting velocity step data, but (unfortunately) without much improving the fit to slide-hold-slide data. Because "state" as defined by the Aging law has traditionally been interpreted as contact age, we also use our model to determine whether the "Aging law" actually tracks contact age for general velocity histories. As is traditional, we assume that strength increases logarithmically with age. For reasonable definitions of "age" we obtain results significantly different from the Aging law for velocity step increases. Interestingly, we can obtain an analytical solution for velocity steps that is very close to the Aging law if we adopt a definition of age that we consider to be non-physical.
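
    For background, the two classical state-evolution laws can be integrated for a velocity-step history as below; the friction law and its a, b, Dc parameters are standard forms with assumed lab-like values, and the authors' microscopic kernel is not reproduced.

      import numpy as np

      # Classical rate-and-state friction:
      #   mu = mu0 + a ln(V/V0) + b ln(V0*theta/Dc)
      mu0, a, b, Dc, V0 = 0.6, 0.010, 0.012, 1e-5, 1e-6  # assumed values

      def dtheta_aging(theta, V):   # Aging (Dieterich): time-dependent healing
          return 1.0 - V * theta / Dc

      def dtheta_slip(theta, V):    # Slip (Ruina): slip-dependent evolution
          x = V * theta / Dc
          return -x * np.log(x)

      def run(law, V_step=10.0, t_end=200.0, dt=1e-3):
          theta, V, out = Dc / V0, V0, []  # start at steady state, theta=Dc/V
          for t in np.arange(0.0, t_end, dt):
              if t >= 10.0:                # impose a 10x velocity step at t=10 s
                  V = V_step * V0
              theta += dt * law(theta, V)  # forward-Euler state update
              mu = mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)
              out.append((t, mu))
          return np.array(out)

      aging, slip = run(dtheta_aging), run(dtheta_slip)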

  16. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models.

    PubMed

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G

    2006-01-28

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
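
    A commonly quoted form of the FENE-Fraenkel spring law is sketched below for contrast with an ordinary FENE spring; the functional form is stated here as an assumption, and the stiffness, natural length, and extensibility values are illustrative only.

      import numpy as np

      def fene_fraenkel_force(Q_vec, H=400.0, Q0=1.0, delta=0.05):
          """FENE-Fraenkel spring force (assumed form):

              F = H (Q - Q0) / (1 - ((Q - Q0)/delta)**2) * Q_hat

          The spring acts like a stiff rod of natural length Q0: the force
          diverges as |Q - Q0| -> delta, confining the length to a narrow
          band around Q0. H, Q0, delta values here are illustrative only."""
          Q = np.linalg.norm(Q_vec)
          s = (Q - Q0) / delta
          if abs(s) >= 1.0:
              raise ValueError("spring stretched beyond its extensibility")
          return H * (Q - Q0) / (1.0 - s**2) * (Q_vec / Q)

      # an ordinary FENE spring, for contrast: diverges as Q -> Qmax
      def fene_force(Q_vec, H=400.0, Qmax=2.0):
          Q = np.linalg.norm(Q_vec)
          return H * Q / (1.0 - (Q / Qmax) ** 2) * (Q_vec / Q)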

  17. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models

    NASA Astrophysics Data System (ADS)

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G.

    2006-01-01

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.

  18. The effects of continuing care on emerging adult outcomes following residential addiction treatment.

    PubMed

    Bergman, Brandon G; Hoeppner, Bettina B; Nelson, Lindsay M; Slaymaker, Valerie; Kelly, John F

    2015-08-01

    Professional continuing care services enhance recovery rates among adults and adolescents, though less is known about emerging adults (18-25 years old). Despite benefit shown from emerging adults' participation in 12-step mutual-help organizations (MHOs), it is unclear whether participation offers benefit independent of professional continuing care services. Greater knowledge in this area would inform clinical referral and linkage efforts. Emerging adults (N=284; 74% male; 95% Caucasian) were assessed during the year after residential treatment on outpatient sessions per week, percent days in residential treatment and residing in a sober living environment, substance use disorder (SUD) medication use, active 12-step MHO involvement (e.g., having a sponsor, completing step work, contact with members outside meetings), and continuous abstinence (dichotomized yes/no). One generalized estimating equation (GEE) model tested the unique effect of each professional service on abstinence, and, in a separate GEE model, the unique effect of 12-step MHO involvement on abstinence over and above professional services, independent of individual covariates. Apart from SUD medication, all professional continuing care services were significantly associated with abstinence over and above individual factors. In the more comprehensive model, relative to zero 12-step MHO activities, odds of abstinence were 1.3 times greater if patients were involved in one activity, and 3.2 times greater if involved in five activities (lowest mean number of activities in the sample across all follow-ups). Both active involvement in 12-step MHOs and recovery-supportive, professional services that link patients with these community-based resources may enhance outcomes for emerging adults after residential treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
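
    A GEE of the kind described can be fit with statsmodels; the sketch below uses synthetic repeated-measures data with invented variable names and effect sizes, purely to show the model structure (binomial family, exchangeable working correlation within subject).

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_subj, n_visits = 284, 4                    # mimics repeated follow-ups

      # synthetic data: per-visit 12-step involvement and abstinence outcome
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_visits),
          "involvement": rng.integers(0, 6, n_subj * n_visits),  # 0-5 activities
          "outpatient": rng.random(n_subj * n_visits),           # sessions/week
      })
      logit = -1.0 + 0.3 * df["involvement"] + 0.8 * df["outpatient"]
      df["abstinent"] = rng.random(len(df)) < 1 / (1 + np.exp(-logit))

      # GEE with exchangeable working correlation within subject
      X = sm.add_constant(df[["involvement", "outpatient"]])
      model = sm.GEE(df["abstinent"].astype(float), X, groups=df["subject"],
                     family=sm.families.Binomial(),
                     cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())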

  19. 12-step affiliation and attendance following treatment for comorbid substance dependence and depression: a latent growth curve mediation model.

    PubMed

    Worley, Matthew J; Tate, Susan R; McQuaid, John R; Granholm, Eric L; Brown, Sandra A

    2013-01-01

    Among substance-dependent individuals, comorbid major depressive disorder (MDD) is associated with greater severity and poorer treatment outcomes, but little research has examined mediators of posttreatment substance use outcomes within this population. Using latent growth curve models, the authors tested relationships between individual rates of change in 12-step involvement and substance use, utilizing posttreatment follow-up data from a trial of group Twelve-Step Facilitation (TSF) and integrated cognitive-behavioral therapy (ICBT) for veterans with substance dependence and MDD. Although TSF patients were higher on 12-step affiliation and meeting attendance at end-of-treatment as compared with ICBT, they also experienced significantly greater reductions in these variables during the year following treatment, ending at similar levels as ICBT. Veterans in TSF also had significantly greater increases in drinking frequency during follow-up, and this group difference was mediated by their greater reductions in 12-step affiliation and meeting attendance. Patients with comorbid depression appear to have difficulty sustaining high levels of 12-step involvement after the conclusion of formal 12-step interventions, which predicts poorer drinking outcomes over time. Modifications to TSF and other formal 12-step protocols or continued therapeutic contact may be necessary to sustain 12-step involvement and reduced drinking for patients with substance dependence and MDD.

  20. Watershed Management Optimization Support Tool (WMOST) v1: Theoretical Documentation

    EPA Science Inventory

    The Watershed Management Optimization Support Tool (WMOST) is a screening model that is spatially lumped with options for a daily or monthly time step. It is specifically focused on modeling the effect of management decisions on the watershed. The model considers water flows and ...

  1. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; the residual is faster on the GPU than on a CPU, but the large transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion of the benefits of a time-stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
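
    A minimal BDF2 step with a Newton solve on a small stiff test problem shows where the linear-solve cost of implicit methods comes from; the bootstrap step, iteration counts, and test matrix below are assumptions.

      import numpy as np

      def bdf2_solve(f, jac, y0, t_end, h):
          """BDF2: y_{n+1} - (4/3) y_n + (1/3) y_{n-1} = (2/3) h f(y_{n+1}),
          with the nonlinear stage equation solved by Newton's method.
          Toy code with a dense Jacobian; the first step is bootstrapped
          with backward Euler."""
          y_prev = np.asarray(y0, float)
          y = y_prev.copy()
          for _ in range(20):                      # backward-Euler bootstrap
              r = y - y_prev - h * f(y)
              y -= np.linalg.solve(np.eye(len(y)) - h * jac(y), r)
          t = h
          while t < t_end - 1e-12:
              y_new = y.copy()
              for _ in range(20):                  # Newton iteration
                  r = y_new - (4/3) * y + (1/3) * y_prev - (2/3) * h * f(y_new)
                  J = np.eye(len(y)) - (2/3) * h * jac(y_new)
                  y_new -= np.linalg.solve(J, r)
              y_prev, y, t = y, y_new, t + h
          return y

      # stiff linear test problem y' = A y
      A = np.array([[-1000.0, 0.0], [1.0, -1.0]])
      print(bdf2_solve(lambda y: A @ y, lambda y: A, [1.0, 1.0], 1.0, 0.05))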

  2. 10 Steps to Building an Architecture for Space Surveillance Projects

    NASA Astrophysics Data System (ADS)

    Gyorko, E.; Barnhart, E.; Gans, H.

Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well-defined, well-architected space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage and can be sustained more easily using portfolio management techniques based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines. Each step provides a guideline for the type of data to collect and the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavior; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.

  3. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  4. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
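
    The ARK schemes compared here have several stages; the split itself is easiest to see with first-order IMEX Euler, sketched below with a stiff linear term standing in for the implicitly treated fast dynamics (the rate constant and nonstiff term are assumptions).

      import numpy as np

      # y' = f_stiff(y) + f_nonstiff(y); treat the stiff linear part implicitly.
      # Here the stiff term L*y stands in for fast (e.g., acoustic) dynamics.
      L = -1.0e4                      # assumed stiff rate
      def f_nonstiff(y): return np.cos(y)

      def imex_euler(y0, h, n_steps):
          """First-order IMEX Euler:
             y_{n+1} = y_n + h * f_nonstiff(y_n) + h * L * y_{n+1}
          The implicit part is linear, so each step is a scalar solve."""
          y = y0
          for _ in range(n_steps):
              y = (y + h * f_nonstiff(y)) / (1.0 - h * L)
              # explicit Euler alone would need h < 2/|L| = 2e-4 for
              # stability; the IMEX step remains stable at much larger h.
          return y

      print(imex_euler(1.0, h=0.01, n_steps=100))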

  5. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  6. Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1997-01-01

A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (the FLOMG series of flow codes with multigrid) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin-Lomax algebraic equilibrium model and the Johnson-King one-half-equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.

  7. Evaluation of the AnnAGNPS model for predicting runoff and sediment yield in a small Mediterranean agricultural watershed in Navarre (Spain)

    USDA-ARS?s Scientific Manuscript database

    AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...

  8. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    NASA Astrophysics Data System (ADS)

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum or difference of the motors' step numbers. The valve design and its principle of operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, the hydraulic valve and the cylinder. The main features of the valve and drive operation are described, and some specific problem areas arising from the nature of stepping motors and their differential operation in the valve are also considered. The whole non-linear servo drive model is then used for simulation investigations. Initial simulations of the drive with the new valve showed a significant overshoot in the drive step response, which is not acceptable in a positioning process. Therefore, additional effort is spent on reducing the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is tested and further improved in simulations, after which the model is implemented in hardware and the whole servo drive system is tested. The results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.

  9. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

Research on 3D urban modelling has been carried out actively for a long time, and the need for it has recently increased rapidly owing to improved geo-web services and the spread of smart devices. Current 3D urban models, such as those provided by Google Earth, are built from aerial photos, which brings some limitations: prompt updates when buildings change are difficult, many buildings lack a 3D model and texture, and large resources are needed for maintenance and updating. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with measured values of real buildings by comparing the ratios of model edge lengths to measured edge lengths; the result showed a 5.8% average error in the length ratio. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools or datasets.

  10. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    PubMed Central

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed “two-step optimization for spatial accessibility improvement (2SO4SAI).” The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China. PMID:28484707

  11. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China.

    PubMed

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong; Wang, Fahui

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement (2SO4SAI)." The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.
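
    The accessibility measure underlying the second step, a supply-demand match ratio combined with spatial interaction, follows the two-step floating catchment area pattern; the sketch below assumes a gravity-type exponential decay and uses invented toy data.

      import numpy as np

      def two_step_fca(supply, demand, dist, beta=1.0):
          """Two-step floating catchment area (2SFCA) accessibility.

          Step 1: each facility j gets a supply/demand ratio
                  R_j = S_j / sum_i D_i * W(d_ij)
          Step 2: each location i sums the ratios it can reach,
                  A_i = sum_j R_j * W(d_ij)
          with a gravity-type decay W(d) = exp(-beta * d) (assumed form)."""
          W = np.exp(-beta * dist)    # dist: (n_locations, n_facilities)
          R = supply / (demand @ W)   # step 1: facility-level ratios
          return W @ R                # step 2: location-level accessibility

      # toy data: 5 residential locations, 2 hospitals
      demand = np.array([100.0, 250.0, 80.0, 120.0, 60.0])
      supply = np.array([30.0, 20.0])          # e.g., beds or physicians
      dist = np.random.default_rng(1).uniform(0.5, 5.0, size=(5, 2))
      print(two_step_fca(supply, demand, dist))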

  12. Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example

    USGS Publications Warehouse

    Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.

    2016-02-10

The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, it also makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs: using this tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this tool allows the user to evaluate a range of management options and implications. The goal is a user-friendly tool for developing fish population models that help natural resource managers inform their decision-making processes; however, as with all population models, caution is needed, and the limitations of the model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.
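
    The variance structure described, with all variance applied at the iteration and time-step levels, can be mimicked by a minimal stage-structured projection; the life-cycle matrix, coefficient of variation, and abundances below are invented, and this is not the USGS spreadsheet tool itself.

      import numpy as np

      rng = np.random.default_rng(42)
      n_iter, n_steps = 1000, 50     # Monte Carlo iterations x annual steps

      # assumed 3-stage life cycle: mean fecundity and survival/transition rates
      mean_A = np.array([[0.0, 0.0, 20.0],    # fecundity of stage 3
                         [0.3, 0.4, 0.0],     # survival/transition rates
                         [0.0, 0.5, 0.8]])
      cv = 0.10                               # assumed coefficient of variation

      final = np.empty(n_iter)
      for it in range(n_iter):
          n = np.array([500.0, 200.0, 50.0])  # initial stage abundances
          for t in range(n_steps):
              # draw a new random matrix every time step (all variance applied
              # at the iteration x time-step level, none partitioned out)
              A = rng.normal(mean_A, cv * mean_A)
              n = np.clip(A, 0.0, None) @ n
          final[it] = n.sum()

      print("median final population:", np.median(final))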

  13. Developing a methodology for real-time trading of water withdrawal and waste load discharge permits in rivers.

    PubMed

    Soltani, Maryam; Kerachian, Reza

    2018-04-15

In this paper, a new methodology is proposed for the real-time trading of water withdrawal and waste load discharge permits in agricultural areas along rivers. Total Dissolved Solids (TDS) is chosen as an indicator of river water quality, and the TDS load that agricultural water users discharge to the river is controlled by storing a part of return flows in evaporation ponds. Available surface water withdrawal and waste load discharge permits are determined using a non-linear multi-objective optimization model. Total available permits are then fairly reallocated among agricultural water users, proportional to their arable lands. Water users can trade their water withdrawal and waste load discharge permits simultaneously, in a bilateral, step-by-step framework, which takes advantage of differences in their water use efficiencies and agricultural return flow rates. A trade that takes place at a given time step results in either more benefit or less diverted return flow. The Nucleolus cooperative game is used to redistribute the benefits generated through trades in different time steps. The proposed methodology is applied to the PayePol region in the Karkheh River catchment, southwest Iran. Predicting that 1922.7 Million Cubic Meters (MCM) of annual flow is available to agricultural lands at the beginning of the cultivation year, the real-time optimization model estimates the total annual benefit to reach 46.07 million US Dollars (USD), which requires 6.31 MCM of return flow to be diverted to the evaporation ponds. Fair reallocation of the permits changes these values to 35.38 million USD and 13.69 MCM, respectively. Results illustrate the effectiveness of the proposed methodology in real-time water and waste load allocation and simultaneous trading of permits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Evaluating a primary care psychology service in Ireland: a survey of stakeholders and psychologists.

    PubMed

    Corcoran, Mark; Byrne, Michael

    2017-05-01

Primary care psychology services (PCPS) represent an important resource in meeting the various health needs of our communities. This study evaluated the PCPS in a two-county area within the Republic of Ireland. The objectives were to (i) examine the viewpoints of both psychologists and stakeholders (healthcare professionals only) on the service and (ii) examine the enactment of the stepped care model of service provision. Separate surveys were sent to primary care psychologists (n = 8), general practitioners (GPs; n = 69) and other stakeholders in the two counties. GPs and stakeholders were asked to rate the current PCPS. The GP survey specifically examined referrals to the PCPS and service configuration, while the stakeholder survey also requested suggestions for future service provision. Psychologists were asked to provide information regarding their workload, time spent on certain tasks and productivity ideas. Referral numbers, waiting lists and waiting times were also obtained. All 8 psychologists, 23 GPs (33% response rate) and 37 stakeholders (unknown response rate) responded. GPs and stakeholders reported access to the PCPS as a primary concern, with waiting times of up to 80 weeks in some areas. Service provision to children and adults was uneven between counties. A stepped care model of service provision was not observed. Access can be improved by further implementation of a stepped care service, developing a high-throughput service for adults (based on a stepped care model), and employing a single waiting list for each county to ensure equal access. © 2016 John Wiley & Sons Ltd.

  15. Assessing the performance of regional landslide early warning models: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Calvello, M.; Piciullo, L.

    2015-10-01

    The paper proposes the evaluation of the technical performance of a regional landslide early warning system by means of an original approach, called EDuMaP method, comprising three successive steps: identification and analysis of the Events (E), i.e. landslide events and warning events derived from available landslides and warnings databases; definition and computation of a Duration Matrix (DuMa), whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model Performance (P) by means of performance criteria and indicators applied to the duration matrix. During the first step, the analyst takes into account the features of the warning model by means of ten input parameters, which are used to identify and classify landslide and warning events according to their spatial and temporal characteristics. In the second step, the analyst computes a time-based duration matrix having a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The proposed method is based on a framework clearly distinguishing between local and regional landslide early warning systems as well as among correlation laws, warning models and warning systems. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warnings data from the municipal early warning system operating in Rio de Janeiro (Brazil).
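
    The duration-matrix bookkeeping of the second step can be sketched as below; the class definitions and event records are toy assumptions, not the EDuMaP defaults.

      import numpy as np

      def duration_matrix(warn_level, n_landslides, warn_classes,
                          slide_classes, dt=1.0):
          """Accumulate time (in units of dt) spent in each (warning class,
          landslide class) pair, as in EDuMaP's second step.

          warn_level, n_landslides : per-time-step records of the issued
                                     warning level and observed landslides
          warn_classes, slide_classes : number of classes for each axis
          (Class definitions here are toy assumptions.)"""
          D = np.zeros((warn_classes, slide_classes))
          for w, k in zip(warn_level, n_landslides):
              slide_cls = min(k, slide_classes - 1)  # toy binning: 0, 1, 2+
              D[w, slide_cls] += dt
          return D

      # toy records over 10 days: warning levels 0-2, daily landslide counts
      warns  = [0, 0, 1, 2, 2, 1, 0, 0, 1, 0]
      slides = [0, 0, 0, 1, 3, 0, 0, 1, 0, 0]
      print(duration_matrix(warns, slides, warn_classes=3, slide_classes=3))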

  16. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases will be applied to further test the method and its implementation.

  17. Adaptive time steps in trajectory surface hopping simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de

    2016-05-21

Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  18. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
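
    The control flow of the protocol, retrying a step with a smaller time step whenever a diagnostic flags a critical region and relaxing back afterwards, can be sketched generically; the toy dynamics and the stand-in diagnostic below are assumptions.

      import numpy as np

      def propagate_adaptive(step_fn, diagnostic, state, t_end,
                             dt0=0.5, dt_min=0.01):
          """Retry each step with halved dt whenever `diagnostic` flags
          trouble (in TSH, e.g., strong active/inactive orbital mixing);
          restore the full step size once the critical region is passed.
          Generic sketch only."""
          t, dt = 0.0, dt0
          while t < t_end:
              dt = min(dt, t_end - t)
              trial = step_fn(state, t, dt)
              if diagnostic(trial) and dt > dt_min:
                  dt = max(dt / 2.0, dt_min)   # critical region: shrink step
                  continue                     # ...and retry from same state
              state, t = trial, t + dt
              dt = min(dt * 2.0, dt0)          # relax back toward the default
          return state

      # toy dynamics: scalar ODE with a sharp feature near t = 5
      step = lambda y, t, dt: y + dt * (-y + 10.0 * np.exp(-50 * (t - 5.0) ** 2))
      trouble = lambda y: abs(y) > 2.0         # stand-in "critical region" test
      print(propagate_adaptive(step, trouble, 0.0, 10.0))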

  19. Community detection using Kernel Spectral Clustering with memory

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Suykens, Johan A. K.

    2013-02-01

This work relates to the problem of community detection in dynamic scenarios, which arises, for instance, in the segmentation of moving objects, the clustering of telephone traffic data, and time-series microarray data. A desirable feature of a clustering model that has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of the cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as valid prior knowledge. The latter allows the model to cluster the current data well while staying consistent with the recent history. Here we propose a generalization of the MKSC model with arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother the clustering results are over time. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, a state-of-the-art method, and obtain comparable or better results.

  20. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
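
    The two snapshot strategies are easy to state in code; the sketch below builds POD bases from solution snapshots (M1) and from the derivative-augmented set (M2) for a toy linear system, without computing the error bounds themselves.

      import numpy as np
      from scipy.linalg import expm

      def pod_basis(snapshots, r):
          """Return the leading r left singular vectors of a snapshot matrix
          (columns = state snapshots)."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          return U[:, :r], s

      # toy linear system x' = A x sampled at a few coarse times
      rng = np.random.default_rng(3)
      A = -np.eye(20) + 0.1 * rng.standard_normal((20, 20))
      x0 = rng.standard_normal(20)
      times = np.linspace(0.0, 2.0, 6)       # few, widely spaced snapshots

      X  = np.column_stack([expm(A * t) @ x0 for t in times])  # M1: solution
      Xd = A @ X                             # exact time-derivative snapshots
      X2 = np.column_stack([X, Xd])          # M2: augmented snapshot set

      V1, s1 = pod_basis(X, r=4)
      V2, s2 = pod_basis(X2, r=4)
      # M2's basis also captures directions the dynamics move toward between
      # snapshots, which is what drives its higher-order error bound.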

  1. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  2. Short hold times in dynamic vapor sorption measurements mischaracterize the equilibrium moisture content of wood

    Treesearch

    Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka

    2017-01-01

    Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...

  3. Latent Subgroup Analysis of a Randomized Clinical Trial Through a Semiparametric Accelerated Failure Time Mixture Model

    PubMed Central

    Altstein, L.; Li, G.

    2012-01-01

    Summary This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608

  4. Investigation of Particle Accumulation, Chemosensitivity and Thermosensitivity for Effective Solid Tumor Therapy Using Thermosensitive Liposomes and Hyperthermia.

    PubMed

    Lokerse, Wouter J M; Bolkestein, Michiel; Ten Hagen, Timo L M; de Jong, Marion; Eggermont, Alexander M M; Grüll, Holger; Koning, Gerben A

    2016-01-01

Doxorubicin (Dox) loaded thermosensitive liposomes (TSLs) have shown promising results for hyperthermia-induced local drug delivery to solid tumors. Typically, the tumor is heated to hyperthermic temperatures (41-42 °C), which induces intravascular drug release from TSLs within the tumor tissue, leading to high local drug concentrations (1-step delivery protocol). Next to providing a trigger for drug release, hyperthermia (HT) has been shown to be cytotoxic to tumor tissue, to enhance chemosensitivity and to increase particle extravasation from the vasculature into the tumor interstitial space. The latter can be exploited for a 2-step delivery protocol, where HT is applied prior to i.v. TSL injection to enhance tumor uptake and, after a 4-hour waiting period, a second time to induce drug release. In this study, we compare the 1- and 2-step delivery protocols and investigate which factors are of importance for a therapeutic response. In murine B16 melanoma and BFS-1 sarcoma cell lines, HT induced an enhanced Dox uptake in 2D and 3D models, resulting in enhanced chemosensitivity. In vivo, therapeutic efficacy studies were performed for both tumor models, showing a therapeutic response for only the 1-step delivery protocol. SPECT/CT imaging allowed quantification of the liposomal accumulation in both tumor models at physiological temperatures and after a HT treatment. A simple two-compartment model was used to derive respective rates for liposomal uptake, washout and retention, showing that the B16 model has a twofold higher liposomal uptake compared to the BFS-1 tumor. HT increases uptake and retention of liposomes in both tumor models by the same factor of 1.66, maintaining the absolute differences between the two models. Histology showed that HT-induced apoptosis, blood vessel integrity and interstitial structures are important factors for TSL accumulation in the investigated tumor types. However, modeling data indicated that in a 2-step delivery protocol the intraliposomal Dox fraction did not reach therapeutically relevant concentrations in the tumor tissue, owing to leakage of the drug from its liposomal carrier, providing an explanation for the observed lack of efficacy.

  5. A computer model for predicting grapevine cold hardiness

    USDA-ARS?s Scientific Manuscript database

    We developed a robust computer model of grapevine bud cold hardiness that will aid in the anticipation of and response to potential injury from fluctuations in winter temperature and from extreme cold events. The model uses time steps of 1 day along with the measured daily mean air temperature to ca...

  6. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  7. THE INFLUENCE OF MODEL TIME STEP ON THE RELATIVE SENSITIVITY OF POPULATION GROWTH TO SURVIVAL, GROWTH AND REPRODUCTION

    EPA Science Inventory

    Matrix population models are often used to extrapolate from life stage-specific stressor effects on survival and reproduction to population-level effects. Demographic elasticity analysis of a matrix model allows an evaluation of the relative sensitivity of population growth rate ...

  8. THE INFLUENCE OF MODEL TIME STEP ON THE RELATIVE SENSITIVITY OF POPULATION GROWTH RATE TO REPRODUCTION

    EPA Science Inventory

    In recent years there has been an increasing interest in using population models in environmental assessments. Matrix population models represent a valuable tool for extrapolating from life stage-specific stressor effects on survival and reproduction to effects on finite populati...

  9. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  10. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using a kinetics-controlled combustion submodel governed by a four-step global chemical reaction, or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
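
    A minimal sketch (an assumption about the blend, not the KIVA-2 source) of the hybrid characteristic-time idea described above: species relax toward equilibrium at a rate set by a combination of the chemical-kinetics and turbulent-mixing time scales, so the longer (slower) time dominates the conversion.

    ```python
    def hybrid_rate(y, y_eq, tau_chem, tau_mix, f_turb=1.0):
        """Return dY/dt for mass fraction y relaxing toward equilibrium y_eq.

        tau_chem : chemical-kinetics time scale (s)
        tau_mix  : turbulent-mixing time scale (s), e.g. k/epsilon
        f_turb   : assumed weighting of the mixing time
        """
        tau_eff = tau_chem + f_turb * tau_mix  # the longer time scale dominates
        return -(y - y_eq) / tau_eff

    # Example: fuel mass fraction relaxing toward equilibrium over one time step.
    y, y_eq = 0.06, 0.01
    dt = 1.0e-5
    y_new = y + dt * hybrid_rate(y, y_eq, tau_chem=2e-4, tau_mix=1e-3)
    print(f"fuel mass fraction after one step: {y_new:.5f}")
    ```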

  11. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using a kinetics-controlled combustion submodel governed by a four-step global chemical reaction, or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.

  12. Index extraction for electromagnetic field evaluation of high power wireless charging system.

    PubMed

    Park, SangWook

    2017-01-01

    This paper presents precise dosimetry for a highly resonant wireless power transfer (HR-WPT) system using an anatomically realistic human voxel model. The dosimetry for the HR-WPT system, designed to operate at 13.56 MHz (one of the ISM band frequencies), is conducted at various distances between the human model and the system, and under aligned and misaligned conditions of the transmitting and receiving circuits. The specific absorption rates in the human body are computed by a two-step approach: in the first step, the field generated by the HR-WPT system is calculated, and in the second step the specific absorption rates are computed with the scattered-field finite-difference time-domain method, treating the fields obtained in the first step as the incident fields. The safety compliance for non-uniform field exposure from the HR-WPT system is discussed with reference to the international safety guidelines. Furthermore, the coupling factor concept is employed to relax the maximum allowable transmitting power. Coupling factors derived from the dosimetry results are presented. In this calculation, the external magnetic field restriction for the HR-WPT system can be relaxed by approximately four times using the coupling factor in the worst exposure scenario.

  13. Modeling the stepping mechanism in negative lightning leaders

    NASA Astrophysics Data System (ADS)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepwise manner via the mechanism of so-called space leaders, in contrast to positive ones, which propagate continuously. Although this fact has been known for about a hundred years, no plausible model explaining this asymmetry had been developed until now. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of negative leader steps. Their appearance becomes possible due to accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account an asymmetry in the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need roughly half the electric field of negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  14. Oceanic signals in rapid polar motion: results from a barotropic forward model with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Schindelegger, Michael; Quinn, Katherine J.; Ponte, Rui M.

    2017-04-01

    Numerical modeling of non-tidal variations in ocean currents and bottom pressure has played a key role in closing the excitation budget of Earth's polar motion for a wide range of periodicities. Non-negligible discrepancies between observations and model accounts of pole position changes prevail, however, on sub-monthly time scales and call for examination of hydrodynamic effects usually omitted in general circulation models. Specifically, complete hydrodynamic cores must incorporate self-attraction and loading (SAL) feedbacks on redistributed water masses, effects that produce ocean bottom pressure perturbations of typically about 10% relative to the computed mass variations. Here, we report on a benchmark simulation with a near-global, barotropic forward model forced by wind stress, atmospheric pressure, and a properly calculated SAL term. The latter is obtained by decomposing ocean mass anomalies on a 30-minute grid into spherical harmonics at each time step and applying Love numbers to account for seafloor deformation and changed gravitational attraction. The increase in computational time at each time step is on the order of 50%. Preliminary results indicate that the explicit consideration of SAL in the forward runs increases the fidelity of modeled polar motion excitations, in particular on time scales shorter than 5 days, as evident from cross-spectral comparisons with geodetic excitation. Definite conclusions regarding the relevance of SAL in simulating rapid polar motion are, however, still hampered by the model's incomplete domain representation, which excludes parts of the highly energetic Arctic Ocean.
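
    A minimal sketch (not the authors' code) of applying a SAL operator in the spectral domain: each spherical-harmonic degree of the ocean mass-anomaly field is scaled by a factor built from load Love numbers. The Love-number values below are illustrative assumptions.

    ```python
    import numpy as np

    RHO_W, RHO_E = 1025.0, 5517.0  # seawater and mean-Earth densities (kg/m^3)

    def sal_from_mass(mass_coeffs, h_load, k_load):
        """Scale degree-wise mass-anomaly coefficients into a SAL contribution.

        mass_coeffs : array, one coefficient per degree l = 0..L
        h_load, k_load : load Love numbers h'_l, k'_l for the same degrees
        """
        l = np.arange(mass_coeffs.shape[0])
        factor = 3.0 * RHO_W / RHO_E * (1.0 + k_load - h_load) / (2.0 * l + 1.0)
        return factor * mass_coeffs

    # Illustrative use with made-up coefficients and example Love numbers:
    mass = np.array([0.0, 1.0, 0.5, 0.2, 0.1])
    h = np.array([0.0, -0.29, -1.00, -1.06, -1.05])
    k = np.array([0.0, -0.31, -0.20, -0.13, -0.10])
    print(sal_from_mass(mass, h, k))
    ```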

  15. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    NASA Astrophysics Data System (ADS)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method that is physically consistent with the hydrologic processes and can capture the short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales, such as daily. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of the streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. In contrast, daily flow generated using the Markov chain approach is capable of producing a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and its improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
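
    A minimal sketch (assumed parameter values, not the study's calibration) of step 1 described above: a two-state (rise/fall) Markov chain generates daily flow, with rising-limb increments sampled from a Gamma distribution and the falling limb following an exponential recession.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    p_stay_rise, p_stay_fall = 0.6, 0.8   # assumed transition probabilities
    gamma_shape, gamma_scale = 2.0, 5.0   # assumed Gamma parameters (m^3/s)
    recession_k = 0.15                    # assumed recession constant (1/day)

    def generate_daily_flow(n_days, q0=20.0):
        q, state = [q0], "rise"
        for _ in range(n_days - 1):
            if state == "rise":
                q.append(q[-1] + rng.gamma(gamma_shape, gamma_scale))
                state = "rise" if rng.random() < p_stay_rise else "fall"
            else:
                q.append(q[-1] * np.exp(-recession_k))
                state = "fall" if rng.random() < p_stay_fall else "rise"
        return np.array(q)

    flows = generate_daily_flow(365)
    print(f"mean {flows.mean():.1f}, max {flows.max():.1f} m^3/s")
    # Step 2 of the paper would feed such series into a KNN bootstrap
    # resampler to restore spatial and temporal dependence across sites.
    ```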

  16. Reduced Order Models for Reactions of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Kober, Edward

    The formulation of reduced order models for the reaction chemistry of energetic materials under high pressures is needed for the development of mesoscale models in the areas of initiation, deflagration and detonation. Phenomenologically, 4-8 step models have been formulated from cook-off data by analyzing the temperature rise of heated samples. Reactive molecular dynamics simulations have been used to simulate many of these processes, but reducing the results of those simulations to simple models has not been achieved. Typically, these efforts have focused on identifying molecular species and detailing specific chemical reactions. An alternative approach is presented here that is based on identifying the coordination geometries of each atom in the simulation and tracking classes of reactions by correlated changes in these geometries. Here, every atom and type of reaction is documented for every time step; no information is lost to unsuccessful molecular identification. Principal Component Analysis methods can then be used to map out the effective chemical reaction steps. For HMX and TATB decompositions simulated with ReaxFF, 90% of the data can be explained by 4-6 steps, generating models similar to those from the cook-off analysis. By performing these simulations at a variety of temperatures and pressures, both the activation and reaction energies and volumes can then be extracted.
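
    A minimal sketch (synthetic data, not the ReaxFF trajectories) of the analysis described above: count coordination-geometry change events per time step, then use PCA to ask how many effective reaction steps explain ~90% of the variance.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_steps, n_event_types = 500, 20
    # Synthetic event-count matrix: rows = MD time steps, columns = classes
    # of correlated coordination changes (stand-ins for reaction classes).
    latent = rng.gamma(2.0, 1.0, size=(n_steps, 5))   # 5 "true" reaction steps
    mixing = rng.random((5, n_event_types))
    counts = latent @ mixing + 0.1 * rng.random((n_steps, n_event_types))

    pca = PCA().fit(counts)
    explained = np.cumsum(pca.explained_variance_ratio_)
    n_needed = int(np.searchsorted(explained, 0.90) + 1)
    print(f"{n_needed} components explain 90% of the variance")
    ```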

  17. Real-time modeling of heat distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
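
    A minimal sketch of the interpolation step described above: given ceiling and floor temperature fields on a common horizontal grid, fill the room volume by interpolating in height. The linear form is an assumption; the text only says "an interpolation" is used.

    ```python
    import numpy as np

    def room_temperature_model(t_floor, t_ceiling, heights, room_height):
        """t_floor, t_ceiling: 2-D arrays (ny, nx); heights: 1-D array of z levels."""
        w = np.asarray(heights) / room_height          # 0 at floor, 1 at ceiling
        return t_floor[None, :, :] * (1.0 - w[:, None, None]) \
             + t_ceiling[None, :, :] * w[:, None, None]

    t_floor = np.full((4, 6), 18.0)     # deg C, illustrative sensor maps
    t_ceiling = np.full((4, 6), 26.0)
    t3d = room_temperature_model(t_floor, t_ceiling, np.array([0.0, 1.5, 3.0]), 3.0)
    print(t3d[:, 0, 0])  # -> [18. 22. 26.]
    ```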

  18. Users Manual for the Geospatial Stream Flow Model (GeoSFM)

    USGS Publications Warehouse

    Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James

    2008-01-01

    The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.

  19. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
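
    A minimal sketch (toy model, simplified radial design rather than full Morris trajectories) of the screening step named above, the method of elementary effects: perturb one parameter at a time from random base points and average the absolute effects. The paper's second step, contribution to variance, would then be applied to the parameters flagged here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def model(x):
        # Stand-in nonlinear model with parameters of very different influence.
        return x[0] ** 2 + 0.1 * x[1] + 5.0 * x[0] * x[2]

    def elementary_effects(f, n_params, n_points=50, delta=0.1):
        mu_star = np.zeros(n_params)
        for _ in range(n_points):
            base = rng.random(n_params)
            f0 = f(base)
            for i in range(n_params):
                stepped = base.copy()
                stepped[i] += delta
                mu_star[i] += abs((f(stepped) - f0) / delta)
        return mu_star / n_points

    print(elementary_effects(model, 3))  # large values flag influential parameters
    ```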

  20. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.

  1. Development of a human cadaver model for training in laparoscopic donor nephrectomy.

    PubMed

    Sutton, Erica R H; Billeter, Adrian; Druen, Devin; Roberts, Henry; Rice, Jonathan

    2017-06-01

    The organ procurement network recommends that a surgeon record 15 cases as surgeon or assistant for laparoscopic donor nephrectomies (LDN) prior to independent practice. The literature suggests that the learning curve for improved perioperative and patient outcomes is closer to 35 cases. In this article, we describe our development of a model utilizing fresh tissue and objective, quantifiable endpoints to document surgical progress and efficiency in each of the major steps involved in LDN. Phase I of model development focused on the modifications necessary to maintain visualization for laparoscopic surgery in a human cadaver. Phase II tested proposed learner-based metrics of procedural competency for multiport LDN by timing procedural steps of LDN in a novice learner. Phases I and II required 12 and 9 cadavers, respectively, with a total of 35 kidneys utilized. The following metrics improved with trial number for multiport LDN: time taken for dissection of the gonadal vein, ureter, renal hilum, and adrenal and lumbar veins; simulated warm ischemic time (WIT); and operative time. Human cadavers can be used for training in LDN, as evidenced by improvements in timed learner-based metrics. This simulation-based model fills a gap in available training options for surgeons. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Integrating Thermodynamic Models in Geodynamic Simulations: The Example of the Community Software ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.

    2017-12-01

    Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition, while the melting process itself is controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and makes it possible to integrate different parametrizations of reactions and phase transitions: these may be implemented as simple analytical expressions or look-up tables, or computed by thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, or decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the relative time scales of fluid flow and chemical reactions. Our software provides a framework to integrate thermodynamic models in high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.

  3. Regionally Implicit Discontinuous Galerkin Methods for Solving the Relativistic Vlasov-Maxwell System

    NASA Astrophysics Data System (ADS)

    Guthrey, Pierson Tyler

    The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons can accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities; thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin (DG) method. One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. The maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which previously had been shown to be equivalent (for linear constant coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG (STDG) method. The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information, but also neighboring information. With this modification, we show that the stability is greatly enhanced: the polynomial-degree dependence of the maximum time-step is removed, and vastly improved time-steps are obtained in multiple spatial dimensions. Upon the development of the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations. For each, we validate the high-order method on several test cases. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.
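
    A minimal sketch of the time-step restriction discussed above: for explicit Runge-Kutta DG schemes a commonly quoted bound is dt <= CFL * h / (c (2k+1)), so the stable step shrinks as the polynomial degree k grows. The RIDG predictor is designed to remove this degree dependence; the bound below is only the standard explicit-DG estimate, shown for comparison.

    ```python
    def explicit_dg_dt(h, wave_speed, degree, cfl=1.0):
        """Standard explicit RKDG stable time-step estimate (an assumption here)."""
        return cfl * h / (wave_speed * (2 * degree + 1))

    h, c = 0.01, 1.0
    for k in range(5):
        print(f"degree {k}: explicit dt <= {explicit_dg_dt(h, c, k):.2e}")
    ```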

  4. Daily rainfall forecasting for one year in a single run using Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Poornima; Jothiprakash, V.

    2018-06-01

    Effective modelling and prediction of rainfall at smaller time steps is reported to be very difficult owing to its highly erratic nature. Accurate forecasts of daily rainfall over longer durations (multiple time steps) may be exceptionally helpful in the efficient planning and management of water resources systems. Identification of inherent patterns in a rainfall time series is also important for an effective water resources planning and management system. In the present study, Singular Spectrum Analysis (SSA) is utilized to forecast the daily rainfall time series of the Koyna watershed in Maharashtra, India, for 365 days, after extracting various components of the rainfall time series such as the trend, periodic, cyclic and noise components. In order to forecast the time series over a longer horizon (365 days, one window length), the signal and noise components of the time series are forecast separately and then added together. The results of the study show that SSA could extract the various components of the time series effectively and could also forecast the daily rainfall time series over a longer duration, such as one year, in a single run with reasonable accuracy.
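
    A minimal sketch (synthetic series, simplified grouping) of the SSA decomposition underlying the approach described above: embed the series in a Hankel trajectory matrix, take its SVD, and reconstruct a smooth signal component by diagonal averaging of the leading terms.

    ```python
    import numpy as np

    def ssa_reconstruct(series, window, n_components):
        n = len(series)
        k = n - window + 1
        X = np.column_stack([series[i:i + window] for i in range(k)])  # Hankel
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        # Diagonal averaging (Hankelization) back to a 1-D series.
        out, counts = np.zeros(n), np.zeros(n)
        for j in range(k):
            out[j:j + window] += Xr[:, j]
            counts[j:j + window] += 1.0
        return out / counts

    t = np.arange(3 * 365)
    rain = 5 + 3 * np.sin(2 * np.pi * t / 365) \
         + np.random.default_rng(3).gamma(1, 2, t.size)
    signal = ssa_reconstruct(rain, window=365, n_components=3)
    print(f"residual std: {np.std(rain - signal):.2f}")
    # In the paper, signal and noise components are forecast separately
    # and then summed.
    ```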

  5. Time limited field of regard search

    NASA Astrophysics Data System (ADS)

    Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho

    2005-05-01

    Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.

  6. Inverting pump-probe spectroscopy for state tomography of excitonic systems.

    PubMed

    Hoyer, Stephan; Whaley, K Birgitta

    2013-04-28

    We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.

  7. Development of an advanced system identification technique for comparing ADAMS analytical results with modal test data for a MICON 65/13 wind turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialasiewicz, J.T.

    1995-07-01

    This work uses the theory developed in NREL/TP--442-7110 to analyze simulated data from an ADAMS (Automated Dynamic Analysis of Mechanical Systems) model of the MICON 65/13 wind turbine. The Observer/Kalman Filter identification approach is expanded to use input-output time histories from ADAMS simulations or structural test data. A step-by-step outline is offered on how the tools developed in this research can be used for validation of the ADAMS model.

  8. Modeling of Hall Thruster Lifetime and Erosion Mechanisms (Preprint)

    DTIC Science & Technology

    2007-09-01

    A model of the Hall thruster plasma discharge has been upgraded to simulate the erosion of the thruster acceleration channel, the degradation of which is the main life-limiting factor of the propulsion system. Evolution of the thruster geometry as a result of material removal due to sputtering is modeled by calculating wall erosion rates, stepping the grid boundary by a chosen time step, and altering the computational mesh between simulation runs. The code is first tuned to predict the nose-cone erosion of a 200 W Busek Hall thruster, the BHT-200. Simulated erosion...
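
    A minimal sketch (geometry, flux profile, and rates are invented) of the erosion-stepping procedure described above: compute a wall erosion rate, advance the wall boundary by rate times the erosion time step, then re-mesh before the next simulation segment.

    ```python
    import numpy as np

    def erosion_rate(ion_flux, sputter_yield):
        """Wall recession rate (m/s) from ion flux (1/m^2/s) and yield (atoms/ion)."""
        atomic_volume = 1.0e-29  # m^3 per sputtered atom, assumed
        return ion_flux * sputter_yield * atomic_volume

    wall_z = np.linspace(0.0, 0.02, 21)          # channel wall axial coordinate (m)
    wall_r = np.full_like(wall_z, 0.01)          # initial wall radius (m)
    dt_step = 50.0 * 3600.0                      # 50-hour erosion time step (s)

    for segment in range(4):                     # e.g. 200 hours of operation
        flux = 1.0e22 * (wall_z / wall_z.max())  # assumed flux profile, peaks at exit
        wall_r += erosion_rate(flux, sputter_yield=0.05) * dt_step
        # ... here the plasma simulation would be re-run on the updated mesh ...
    print(f"exit-plane radius after 200 h: {wall_r[-1] * 1000:.2f} mm")
    ```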

  9. The STEP model: Characterizing simultaneous time effects on practice for flight simulator performance among middle-aged and older pilots

    PubMed Central

    Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C.

    2015-01-01

    Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real world tasks. In this study, we took advantage of existing practice data from five simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by FAA proficiency ratings). We developed a new STEP (Simultaneous Time Effects on Practice) model to: (1) model the simultaneous effects of practice and interval on performance of the five flights, and (2) examine the effects of selected covariates (age, flight expertise, and three composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intra-individual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with either practice or interval. Results indicate that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real world tasks. PMID:26280383

  10. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-07

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  11. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  12. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter for both T_i and ṁ_sub, followed by P_c and the dried product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    PubMed

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area was performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
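
    A minimal sketch (synthetic features, not the skeleton data) of the three-step pipeline described above: learn a dictionary with ICA, project volumes onto it, and classify with an SVM. Raw coefficient magnitudes stand in here for the paper's sparse histograms.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    n_samples, n_dims, n_atoms = 200, 60, 16
    X = rng.laplace(size=(n_samples, n_dims))   # stand-in joint-motion volumes
    y = rng.integers(0, 3, n_samples)           # 3 activity classes

    # Step 1: dictionary of sparse codes via ICA.
    ica = FastICA(n_components=n_atoms, random_state=0, max_iter=500).fit(X)

    # Step 2: project volumes onto the dictionary to get coefficient features.
    features = np.abs(ica.transform(X))

    # Step 3: SVM classification (scored on training data, for illustration only).
    clf = SVC(kernel="linear").fit(features, y)
    print(f"training accuracy: {clf.score(features, y):.2f}")
    ```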

  14. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh-size and time-step scales. With the modeling of particle transport and collision in a time-dependent flux function within a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, convergence speed has been greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC. Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady-state solution.

  15. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2014-04-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  16. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2013-11-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. One way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, the other way out is developed: to face human agency squarely, and direct the modeling approach to the human agency of individuals, coupling this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  17. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    NASA Astrophysics Data System (ADS)

    Post, P.; Kärner, O.

    2009-04-01

    Applicability of random-walk-type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates strong short-range non-stationarity from a nearly stationary longer-range variability region. This indicates that several geophysical time series show non-stationary behaviour at short range and stationary behaviour at longer range (Davis et al., 1996). In order to model such series, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales over some decades. In order to remove the short-range fluctuations from daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of type (0,1,1). This model is applicable for simulating daily precipitation if we select an appropriate time step that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which gives us the possibility of using a stationary approximation if only the noise component of the sum of white noise and random walk is exploited. We get a convenient routine to generate a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of the random component in the precipitation series. The result is understandable due to the small territory of Estonia, situated in the mid-latitude cyclone track. References: Box, G.E.P. and G. Jenkins, 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden-Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan, 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In: G. Trevino et al. (eds), Current Topics in Nonstationarity Analysis, World Scientific, Singapore, 97-158. Kärner, O., 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. D107, doi:10.1029/2001JD002024. Kärner, O., 2005: Some examples on negative feedback in the Earth climate system. Centr. European J. Phys. 3, 190-208. Monin, A.S. and A.M. Yaglom, 1975: Statistical Fluid Mechanics, Vol. 2: Mechanics of Turbulence, MIT Press, Boston, MA, 886 pp.
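
    A minimal sketch (synthetic data, not the Estonian records) of fitting the ARIMA(0,1,1) model described above to 30-day aggregated precipitation totals.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(11)
    daily = rng.gamma(shape=0.8, scale=3.0, size=45 * 360)  # 45 years of daily totals
    monthly = daily.reshape(-1, 30).sum(axis=1)             # 30-day time step

    fit = ARIMA(monthly, order=(0, 1, 1)).fit()
    print(fit.summary().tables[1])
    # An MA(1) coefficient near -1 corresponds to the noise-dominated
    # random-walk interpretation discussed above.
    ```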

  18. CFD analysis of a scramjet combustor with cavity based flame holders

    NASA Astrophysics Data System (ADS)

    Kummitha, Obula Reddy; Pandey, Krishna Murari; Gupta, Rajat

    2018-03-01

    Numerical analysis of a scramjet combustor with different cavity flame holders has been carried out using the ANSYS 16 FLUENT tool. In this research article the internal fluid flow behaviour of the scramjet combustor with different cavity-based flame holders is discussed in detail. The two-dimensional Reynolds-Averaged Navier-Stokes (RANS) governing equations and the shear stress transport (SST) k-ω turbulence model, along with a finite-rate/eddy-dissipation chemistry-turbulence interaction model, have been considered for modelling chemically reacting flows. Owing to its lower computational cost, a global one-step reaction mechanism has been used for combustion modelling of hydrogen and air. The performance of the scramjet combustor with two different cavities, namely a spherical and a step cavity, has been compared with the standard DLR scramjet. From the comparison of numerical results, it is found that the cavity flame holders enhance the development of recirculation regions and generate additional shock waves from the cavity edge. It is also observed that with a cavity flame holder the residence time of air in the scramjet combustor increases, achieving stabilized combustion. From this analysis, it is found that the mixing and combustion efficiency of the scramjet combustor with the step cavity design is optimal compared to the other models.

  19. Model of multistep electron transfer in a single-mode polar medium

    NASA Astrophysics Data System (ADS)

    Feskov, S. V.; Yudanov, V. V.

    2017-09-01

    A mathematical model of multistep photoinduced electron transfer (PET) in a polar medium with a single relaxation time (Debye solvent) is developed. The model includes the polarization nonequilibrium formed in the vicinity of the donor-acceptor molecular system at the initial steps of the photoreaction and its influence on the subsequent steps of PET. It is established that the results of numerical simulation of transient luminescence spectra of photoexcited donor-acceptor complexes (DACs) agree with calculated data obtained on the basis of the well-known experimental technique used to measure the relaxation function of solvent polarization in the vicinity of DACs in the picosecond and subpicosecond ranges.

  20. Modeling the Secondary Drying Stage of Freeze Drying: Development and Validation of an Excel-Based Model.

    PubMed

    Sahni, Ekneet K; Pikal, Michael J

    2017-03-01

    Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, models of secondary drying have been confined to highly complex formulations. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer; drying time is divided into a large number of time steps, within each of which steady-state conditions prevail. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between the calculation and a calibration drying experiment. This tool allows the user to input the parameters specific to the product, process, container, and equipment. Temporal variations in average moisture contents and product temperatures are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important process development tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
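
    A minimal sketch (toy drying physics, invented parameters) of the quasi-steady stepping plus parameter-fitting strategy described above: march the water content forward in steady-state steps, then fit the mass-transfer parameter to a calibration drying curve, mimicking the Excel "Solver" step.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # h, calibration times
    w_obs = np.array([5.0, 3.1, 2.0, 0.9, 0.2])   # % water, calibration data

    def simulate(kg, w0=5.0, w_eq=0.05, dt=0.1, t_end=8.0):
        """Quasi-steady secondary drying: dW/dt = -kg * (W - W_eq) each step."""
        times = np.arange(0.0, t_end + dt, dt)
        w = np.empty_like(times)
        w[0] = w0
        for i in range(1, times.size):
            w[i] = w[i - 1] - dt * kg * (w[i - 1] - w_eq)
        return times, w

    def misfit(kg):
        times, w = simulate(kg)
        return np.sum((np.interp(t_obs, times, w) - w_obs) ** 2)

    best = minimize_scalar(misfit, bounds=(0.01, 5.0), method="bounded")
    print(f"fitted mass-transfer parameter: {best.x:.3f} 1/h")
    ```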

  1. State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Weerts, A. H.; Hazenberg, P.; Torfs, P. J. J. F.; Uijlenhoet, R.

    2012-09-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly, spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to depend on the previous time step only (Markov property). Synthetic and real-world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency does. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
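
    A minimal sketch (toy state vector and observations, not HBV-96) of the EnKF analysis step used above: update an ensemble of model states with perturbed discharge observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_state, n_ens = 10, 50
    X = rng.normal(1.0, 0.3, size=(n_state, n_ens))  # forecast ensemble of states
    H = np.zeros((1, n_state)); H[0, -1] = 1.0       # observe the "discharge" state
    y_obs, r = np.array([1.4]), 0.05 ** 2            # observation and its variance

    # Ensemble covariance and Kalman gain.
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(1))

    # Update each member with a perturbed observation.
    Y = y_obs[:, None] + rng.normal(0.0, np.sqrt(r), size=(1, n_ens))
    X_a = X + K @ (Y - H @ X)
    print(f"prior mean {X[-1].mean():.3f} -> posterior mean {X_a[-1].mean():.3f}")
    ```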

  2. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    NASA Astrophysics Data System (ADS)

    Dessens, Olivier

    2016-04-01

    Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important, as it can greatly influence future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) formulated a method of reconstructing the response of general circulation models (GCMs) to emission trajectories through an idealized experiment; this method is called the "step-response approach" and is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions that meet an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification for the model's results in terms of climate and energy system responses. The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of analysing GCMs with step experiments. Acknowledgments: This work is supported by the FP7 HELIX project (www.helixclimate.eu) References: Anandarajah, G., Pye, S., Usher, W., Kesicki, F., & Mcglade, C. (2011). TIAM-UCL Global model documentation. https://www.ucl.ac.uk/energy-models/models/tiam-ucl/tiam-ucl-manual Good, P., Gregory, J. M., Lowe, J. A., & Andrews, T. (2013). Abrupt CO2 experiments as tools for predicting and understanding CMIP5 representative concentration pathway projections. Climate Dynamics, 40(3-4), 1041-1053.
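
    The step-response idea itself is a simple superposition. The sketch below reconstructs a temperature response by summing delayed, scaled copies of a unit step-response curve; the response curve, the forcing trajectory, and the linearity assumption are toy stand-ins for the GCM-derived quantities of Good et al. (2013).

```python
import numpy as np

def reconstruct_response(forcing, G):
    """Superpose step responses: T(t) = sum_i dF_i * G(t - t_i).

    forcing : (n,) radiative forcing at each yearly time step
    G       : (n,) response to a unit forcing step, taken from an
              idealized abrupt-CO2 experiment (assumed available)
    """
    n = len(forcing)
    dF = np.diff(forcing, prepend=0.0)   # forcing increment each year
    T = np.zeros(n)
    for i in range(n):
        if dF[i] != 0.0:
            T[i:] += dF[i] * G[:n - i]   # delayed, scaled step response
    return T

years = np.arange(200)
G = 0.8 * (1 - np.exp(-years / 30.0))    # toy unit step response (K per W m^-2)
forcing = np.minimum(0.02 * years, 3.0)  # toy forcing ramp (W m^-2)
T = reconstruct_response(forcing, G)
```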

  3. Climate Framework for Uncertainty, Negotiation, and Distribution (FUND)

    EPA Science Inventory

    FUND is an Integrated Assessment model that links socioeconomic, technology, and emission scenarios with atmospheric chemistry, climate dynamics, sea level rise, and the resulting economic impacts. The model runs in time-steps of one year from 1950 to 2300, and distinguishes 16 m...

  4. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research to the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of automated Web-based systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources, and its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database updates, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line repositories. This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, decides on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for a time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). The site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows monitoring of current developments and short-term forecasts, and provides access to the comparison archive stored in the database.

  5. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
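
    The mechanism behind such transforms is that a second-order temporal FD operator effectively replaces -ω² by -(4/Δt²)·sin²(ωΔt/2), so the dispersion can in principle be undone by resampling the spectrum of a simulated trace at remapped frequencies. The sketch below implements that idea in Python; the direction and exact form of the mapping are an assumption here and should be checked against the published transforms of Koene et al. (2018).

```python
import numpy as np

def remove_time_dispersion(u, dt):
    """Resample the spectrum at mapped frequencies to undo the
    frequency-dependent speed-up caused by second-order temporal FD.

    Assumes the FD scheme propagates physical frequency
    w_phys = (2/dt) * sin(w_num * dt / 2); the inverse mapping used
    here is an assumption to be verified against the paper.
    """
    n = len(u)
    U = np.fft.rfft(u)
    w = 2 * np.pi * np.fft.rfftfreq(n, dt)   # physical angular frequencies
    w_num = (2.0 / dt) * np.arcsin(np.clip(w * dt / 2.0, -1.0, 1.0))
    # Interpolate the simulated spectrum at the mapped frequencies
    U_corr = np.interp(w_num, w, U.real) + 1j * np.interp(w_num, w, U.imag)
    return np.fft.irfft(U_corr, n)
```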

  6. Adjustment to time of use pricing: Persistence of habits or change

    NASA Astrophysics Data System (ADS)

    Rebello, Derrick Michael

    1999-11-01

    Generally, the dynamics of residential electricity consumption under TOU rates have not been analyzed completely. A habit persistence model is proposed to account for the dynamics that may be present as a result of recurring habits or a lack of information about the effects of shifting load across TOU periods. In addition, the presence of attrition bias necessitated a two-step estimation approach: the decision to remain in the program was modeled in the first step, while demand for electricity was estimated in the second step. Results show that own-price effects and habit persistence have the most significant effects in the model. The habit effects, while small in absolute terms, are significant. Elasticity estimates show that electricity consumption is inelastic during all periods of the day. Estimates of the long-run elasticities were nearly identical to short-run estimates, showing little or no adjustment across time. Cross-price elasticities indicate a willingness to substitute consumption across periods, implying that TOU goods are weak substitutes. The most significant substitution occurs during the period from 5:00 PM to 9:00 PM, when most individuals are likely to be home and active.

  7. Calculation of muscle loading and joint contact forces during the rock step in Irish dance.

    PubMed

    Shippen, James M; May, Barbara

    2010-01-01

    A biomechanical model for the analysis of dancers and their movements is described. The model consisted of 31 segments, 35 joints, and 539 muscles, and was animated using movement data obtained from a three-dimensional optical tracking system that recorded the motion of dancers. The model was used to calculate forces within the muscles and contact forces at the joints of the dancers in this study. Ground reaction forces were measured using force plates mounted in a sprung floor. The analysis procedure is generic and can be applied to any dance form. As an exemplar of the application process an Irish dance step, the rock, was analyzed. The maximum ground reaction force found was 4.5 times the dancer's body weight. The muscles connected to the Achilles tendon experienced a maximum force comparable to their maximal isometric strength. The contact force at the ankle joint was 14 times body weight, of which the majority of the force was due to muscle contraction. It is suggested that as the rock step produces high forces, and therefore the potential to cause injury, its use should be carefully monitored.

  8. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment, and we are developing such a system. It allows the analysis of planar human motion (e.g., gait) without tracking markers. The system is composed of one fixed camera that acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched to the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally, we present results of this process applied to a leg in motion.

  9. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
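
    The qualitative effect is easy to reproduce with a scalar map. The sketch below assumes a micro-chaos-type form x_{n+1} = a·x_n - b·h·floor(x_n/h) with illustrative parameters (not those of the paper): the unstable linear growth for a > 1 is tamed by the round-down quantized feedback, leaving bounded oscillations whose amplitude scales with the quantization step h.

```python
import numpy as np

a, b, h = 1.2, 0.9, 0.05   # growth, feedback gain, quantization step (illustrative)
x = 0.3
traj = []
for _ in range(2000):
    u = b * h * np.floor(x / h)   # quantized (round-down) feedback
    x = a * x - u
    traj.append(x)

# Without quantization (h -> 0) the fixed point is repelling for a > 1,
# but with quantized feedback the trajectory settles into small-amplitude
# oscillations whose size scales with the quantization step h.
print(min(traj[500:]), max(traj[500:]))
```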

  10. Algorithmically scalable block preconditioner for fully implicit shallow-water equations in CAM-SE

    DOE PAGES

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

    2014-10-19

    Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.

  11. Motion of kinesin in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Knoops, Gert; Vanderzande, Carlo

    2018-05-01

    Kinesin is a molecular motor that transports cargo along microtubules. The results of many in vitro experiments on kinesin-1 are described by kinetic models in which one transition corresponds to the forward motion and subsequent binding of the tethered motor head. We argue that in a viscoelastic medium like the cytosol of a cell this step is not Markov and has to be described by a nonexponential waiting time distribution. We introduce a semi-Markov kinetic model for kinesin that takes this effect into account. We calculate, for arbitrary waiting time distributions, the moment generating function of the number of steps made, and determine from this the average velocity and the diffusion constant of the motor. We illustrate our results for the case of a waiting time distribution that is Weibull. We find that for realistic parameter values, viscoelasticity decreases the velocity and the diffusion constant, but increases the randomness (or Fano factor).
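
    While the paper obtains these quantities analytically from the moment generating function, they are simple to estimate by direct simulation. The sketch below draws Weibull dwell times (parameters illustrative), counts the steps of 8 nm (the well-known kinesin step size) made within an observation window, and computes the velocity, diffusion constant, and Fano factor from the ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 0.5, 8e-3    # Weibull waiting-time parameters (illustrative)
step = 8e-9                 # kinesin step size, ~8 nm
n_traj, t_max = 5000, 1.0   # number of trajectories, observation time (s)

steps_made = np.empty(n_traj)
for i in range(n_traj):
    t, n = 0.0, 0
    while True:
        t += scale * rng.weibull(shape)   # non-exponential dwell time
        if t > t_max:
            break
        n += 1
    steps_made[i] = n

mean_n, var_n = steps_made.mean(), steps_made.var()
velocity = mean_n * step / t_max            # m/s
diffusion = var_n * step**2 / (2 * t_max)   # m^2/s
fano = var_n / mean_n                       # randomness parameter
print(velocity, diffusion, fano)
```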

  12. Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama

    2001-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation model is also implemented and its performance compared to the one equation Spalart-Allmaras Reynolds Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.

  13. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
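
    In outline, the procedure is a loop of transient runs differenced against a steady-state baseline. The sketch below shows this structure in Python; run_model() is a hypothetical stand-in for the actual ground-water model run, with a toy closed-form response in place of real model output.

```python
import numpy as np
from collections import namedtuple

Budget = namedtuple("Budget", "inflow outflow")

def run_model(well, transient, t_end=None):
    """Hypothetical stand-in for a MODFLOW-type simulation: returns
    aquifer-wide inflow/outflow rates, here from a toy closed form."""
    if well is None:
        return Budget(inflow=100.0, outflow=100.0)   # baseline, no withdrawal
    cell, q = well
    f = 1.0 - np.exp(-t_end / (50.0 + 10.0 * cell))  # toy capture growth
    return Budget(inflow=100.0 + 0.3 * q * f, outflow=100.0 - 0.7 * q * f)

def capture_map(cells, q, t):
    base = run_model(well=None, transient=False)
    frac = {}
    for cell in cells:
        run = run_model(well=(cell, q), transient=True, t_end=t)
        d_in = run.inflow - base.inflow        # increased inflow
        d_out = base.outflow - run.outflow     # decreased outflow
        frac[cell] = (d_in + d_out) / q        # capture fraction
    return frac   # contour these values to map capture at time t

print(capture_map(cells=range(5), q=10.0, t=100.0))
```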

  14. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

    The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and of centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176

  15. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

    A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts, i.e., layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly to the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components that are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. The method successfully captured all the hot-spots, with less than 15% overshoot, on a realistic 80 mm² full-chip M1 layout at the 65 nm technology node. A step-by-step derivation of the HRM technique is presented in this paper.

  16. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time-dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta (GARK) methods.

  17. Single-crossover recombination in discrete time.

    PubMed

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  18. Measuring gravel transport and dispersion in a mountain river using passive radio tracers

    USGS Publications Warehouse

    Bradley, D. N.; Tucker, G. E.

    2012-01-01

    Random walk models of fluvial sediment transport recognize that grains move intermittently, with short-duration steps separated by comparatively long rests. These models are built upon the probability distributions of the step length and the resting time. Motivated by these models, tracer experiments have attempted to measure directly the steps and rests of sediment grains in natural streams. This paper describes results from a large tracer experiment designed to test stochastic transport models. We used passive integrated transponder (PIT) tags to label 893 coarse gravel clasts and placed them in Halfmoon Creek, a small alpine stream near Leadville, Colorado, USA. The PIT tags allow us to locate and identify tracers without picking them up or digging them out of the streambed. They also enable us to find a very high percentage of our rocks: 98% after three years and 96% after the fourth year. We use the annual tracer displacement to test two stochastic transport models, the Einstein-Hubbell-Sayre (EHS) model and the Yang-Sayre gamma-exponential model (GEM). We find that the GEM is a better fit to the observations, particularly for slower moving tracers, and suggest that the strength of the GEM is that the gamma distribution of step lengths approximates a compound Poisson distribution.

  19. Data Needs and Modeling of the Upper Atmosphere

    NASA Astrophysics Data System (ADS)

    Brunger, M. J.; Campbell, L.

    2007-04-01

    We present results from our enhanced statistical equilibrium and time-step codes for atmospheric modeling. In particular we use these results to illustrate the role of electron-driven processes in atmospheric phenomena and the sensitivity of the model results to data inputs such as integral cross sections, dissociative recombination rates and chemical reaction rates.

  20. Toward Modeling the Learner's Personality Using Educational Games

    ERIC Educational Resources Information Center

    Essalmi, Fathi; Tlili, Ahmed; Ben Ayed, Leila Jemni; Jemmi, Mohamed

    2017-01-01

    Learner modeling is a crucial step in the learning personalization process. It allows taking into consideration the learner's profile to make the learning process more efficient. Most studies refer to an explicit method, namely questionnaire, to model learners. Questionnaires are time consuming and may not be motivating for learners. Thus, this…

  1. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analyses have been insufficiently accurate and of limited reference value, because the underlying mathematical models were relatively simple, changes in load and in the initial displacement of the piston were ignored, and no experimental verification was conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for closed-loop position control of the hydraulic drive unit, along with the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural parameters of the hydraulic drive unit, its working parameters, fluid transmission characteristics, and measured friction-velocity curves, a simulation analysis is carried out on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the nonlinear mathematical model is adequate, as shown by comparing experimental and simulated step responses under different constant loads. Sensitivity-function time histories of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their patterns are analyzed. The sensitivity indexes of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research identifies the sensitivity characteristics of each parameter of the hydraulic drive unit and the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.

  2. A model of icebergs and sea ice in a joint continuum framework

    NASA Astrophysics Data System (ADS)

    Vaňková, Irena; Holland, David M.

    2017-04-01

    The ice mélange, a mixture of sea ice and icebergs often present in front of tidewater glaciers in Greenland and ice shelves in Antarctica, can have a profound effect on the dynamics of the ice-ocean system. The current inability to model the ice mélange numerically motivates the new modeling approach proposed here. A continuum sea-ice model is taken as a starting point, and icebergs are represented as thick, compact pieces of sea ice held together by large tensile and shear strength selectively introduced into the sea-ice rheology. In order to modify the rheology correctly, a semi-Lagrangian time-stepping scheme is introduced, and at each time step a Lagrangian grid is constructed such that the iceberg shape is preserved exactly. With the proposed treatment, sea ice and icebergs are considered a single fluid with spatially varying rheological properties; mutual interactions are thus automatically included without the need for further parametrization. An important advantage of the presented framework is its potential to be easily included in existing climate models.

  3. Evaluation of the coupled COSMO-CLM+NEMO-Nordic model with focus on North and Baltic seas

    NASA Astrophysics Data System (ADS)

    Lenhardt, J.; Pham, T. V.; Früh, B.; Brauch, J.

    2017-12-01

    The region east of the Baltic Sea has been identified as a hot-spot of climate change (Giorgi, 2006) on the basis of temperature and precipitation variability. For this purpose, the atmosphere model COSMO-CLM has been coupled to the ocean model NEMO, including the sea-ice model LIM3, via the OASIS3-MCT coupler (Pham et al., 2014). The coupler interpolates heat, fresh water and momentum fluxes, sea level pressure, and the fraction of sea ice at the interface in space and time. Our aim is to find an optimal configuration of the existing coupled regional atmosphere-ocean model COSMO-CLM+NEMO-Nordic. So far, results for the North and Baltic seas show that the coupled run has large biases compared with the E-OBS reference data. Therefore, additional evaluations of the simulations are planned using independent satellite observation data (e.g., Copernicus, EURO4M). We have performed a series of runs with the coupled COSMO-CLM+NEMO-Nordic model to investigate differences in model output due to different coupling time steps. First analyses of COSMO-CLM 2 m temperatures suggest that different coupling time steps have an impact on the results of the coupled model run. Additional tests over a longer period are being conducted to understand whether the signal-to-noise ratio could influence the bias. The results will be presented in our poster.

  4. Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walton, C C; Gilmer, G H; Wemhoff, A P

    2007-09-26

    This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components that were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between parameters of film deposition we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, rather than develop new ones. In Part 1 (FY07) we identified and tested the best available code for each process step, then determined whether it can cover the size and time scales we need in reasonable computation times. We also had to determine whether the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.

  5. Technical note: 3-hourly temporal downscaling of monthly global terrestrial biosphere model net ecosystem exchange

    DOE PAGES

    Fisher, Joshua B.; Sikka, Munish; Huntzinger, Deborah N.; ...

    2016-07-29

    Here, the land surface provides a boundary condition to atmospheric forward and flux-inversion models. These models require prior estimates of CO2 fluxes at relatively high temporal resolution (e.g., 3-hourly) because of the high frequency of atmospheric mixing and wind heterogeneity. However, land surface model CO2 fluxes are often provided at monthly time steps, typically because the land surface modeling community focuses more on time steps associated with plant phenology (e.g., seasonal) than on sub-daily phenomena. Here, we describe a new dataset created from 15 global land surface models and 4 ensemble products in the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), temporally downscaled from monthly to 3-hourly output. We provide 3-hourly output for each individual model over 7 years (2004-2010), as well as an ensemble mean, a weighted ensemble mean, and the multi-model standard deviation. Output is provided in three different spatial resolutions for user preferences: 0.5° × 0.5°, 2.0° × 2.5°, and 4.0° × 5.0° (latitude × longitude).

  6. Application of 6D Building Information Model (6D BIM) for Business-storage Building in Slovenia

    NASA Astrophysics Data System (ADS)

    Pučko, Zoran; Vincek, Dražen; Štrukelj, Andrej; Šuman, Nataša

    2017-10-01

    The aim of this paper is to present an application of 6D building information modelling (6D BIM) to a real business-storage building in Slovenia. First, the features of building maintenance are described in general terms according to current Slovenian legislation, and the general principle of BIM is given. After that, the step-by-step activities for building the 6D BIM model are presented: compiling the element list for maintenance, determining element lifetimes and service measures, cost analysis, time analysis, and finally 6D BIM modelling. The presented 6D BIM model is designed in a particular way: the cost analysis is performed as a 5D BIM model with data linked to the 3D BIM model in BIM construction project management software (Vico Office), whereas the time analysis (4D BIM) is carried out with non-linked data in Excel, without a connection to the 3D BIM model. The paper is intended to serve as a guide for building owners preparing a 6D BIM model and to provide insight into the relevant dynamic information on the intervals and costs of maintenance works over the whole building lifecycle.

  7. A new indicator framework for quantifying the intensity of the terrestrial water cycle

    NASA Astrophysics Data System (ADS)

    Huntington, Thomas G.; Weiskel, Peter K.; Wolock, David M.; McCabe, Gregory J.

    2018-04-01

    A quantitative framework for characterizing the intensity of the water cycle over land is presented, and illustrated using a spatially distributed water-balance model of the conterminous United States (CONUS). We approach water cycle intensity (WCI) from a landscape perspective; WCI is defined as the sum of precipitation (P) and actual evapotranspiration (AET) over a spatially explicit landscape unit of interest, averaged over a specified time period (step) of interest. The time step may be of any length for which data or simulation results are available (e.g., sub-daily to multi-decadal). We define the storage-adjusted runoff (Q′) as the sum of actual runoff (Q) and the rate of change in soil moisture storage (ΔS/Δt, positive or negative) during the time step of interest. The Q′ indicator is demonstrated to be mathematically complementary to WCI, in a manner that allows graphical interpretation of their relationship. For the purposes of this study, the indicators were demonstrated using long-term, spatially distributed model simulations with an annual time step. WCI was found to increase over most of the CONUS between the 1945 to 1974 and 1985 to 2014 periods, driven primarily by increases in P. In portions of the western and southeastern CONUS, Q′ decreased because of decreases in Q and soil moisture storage. Analysis of WCI and Q′ at temporal scales ranging from sub-daily to multi-decadal could improve understanding of the wide spectrum of hydrologic responses that have been attributed to water cycle intensification, as well as trends in those responses.
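
    The complementarity of the two indicators follows directly from the landscape water balance, P = AET + Q + ΔS/Δt, combined with the definitions given in the abstract:

```latex
\mathrm{WCI} = P + \mathrm{AET}, \qquad
Q' = Q + \frac{\Delta S}{\Delta t} = P - \mathrm{AET}
\;\;\Longrightarrow\;\;
\mathrm{WCI} + Q' = 2P, \qquad \mathrm{WCI} - Q' = 2\,\mathrm{AET}.
```

    This is what permits the graphical interpretation of the WCI-Q′ relationship: any point in the (WCI, Q′) plane maps uniquely to a (P, AET) pair.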

  8. Analyzing developmental processes on an individual level using nonstationary time series modeling.

    PubMed

    Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E

    2009-01-01

    Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.

  9. Rheological behavior of the crust and mantle in subduction zones on time scales from earthquakes (minutes) to millions of years, inferred from thermomechanical models and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    A key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic", mineral-physics-based, non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon, with time scales spanning from geological to earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process, from rupture (1 minute) to geological time (millions of years), with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermomechanical model of a subduction zone at the geological time scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We then introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability; this model generates a spontaneous earthquake sequence. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 seconds during an earthquake to between a minute and 5 years during the postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge as early as hours to days after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake over the range from one day to four years after the event.

  10. Fast and scalable purification of a therapeutic full-length antibody based on process crystallization.

    PubMed

    Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch

    2013-09-01

    The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. A direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled-up from the mL-scale to the 1 L-scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for the high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies.

  11. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many numerical requirements, we consider the following features to be essential components for meeting our goals for FLASH as an open HEDP toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time-step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al., Commun. Numer. Meth. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane, J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann battery term to account for the spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
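
    For reference, the super-time-stepping substeps of Alexiades et al. (1996) are commonly quoted in the form sketched below; the formula should be checked against the actual FLASH implementation, and the parameter values here are illustrative.

```python
import numpy as np

def sts_substeps(dt_expl, N, nu):
    """Substep sizes for one super-step of super-time-stepping,
    in the form commonly quoted from Alexiades et al. (1996):

        tau_j = dt_expl / ((nu - 1) * cos((2j - 1) * pi / (2N)) + 1 + nu)

    Their sum exceeds N * dt_expl, which is the source of the
    speed-up for explicit diffusion solvers.
    """
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * N)) + 1.0 + nu)

tau = sts_substeps(dt_expl=1e-3, N=10, nu=0.01)
print(tau.sum(), 10 * 1e-3)   # one super-step vs. plain explicit stepping
```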

  12. A parallelization method for time periodic steady state in simulation of radio frequency sheath dynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun

    2017-10-01

    In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high-density plasma discharges, and the semi-implicit method is a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low-temperature plasma discharges has remained a considerable challenge. In particular, parallelization in time has been difficult for time-periodic steady-state problems, such as capacitively coupled plasma discharges and rf sheath dynamics, because the plasma parameter values from the previous time step are needed to calculate the new values at each time step. We therefore present a parallelization method for time-periodic steady-state problems that uses period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations of rf sheath dynamics are conducted. The results show that speedup can be achieved by using a multithreading method.

  13. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes around 1 μm, are treated in this paper. The difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time-step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion itself was not computed because of the small time-step interval. In this paper, a molecule-particle collision model is considered that computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, which represents the number of molecules colliding with the particle in one collision event. The weight factor permits a time-step interval about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by roughly a factor of a million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (the Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.

  14. The Wageningen Lowland Runoff Simulator (WALRUS): a Novel Open Source Rainfall-Runoff Model for Areas with Shallow Groundwater

    NASA Astrophysics Data System (ADS)

    Brauer, C.; Teuling, R.; Torfs, P.; Uijlenhoet, R.

    2014-12-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between the complex, spatially distributed models often used in lowland regions and the simple, parametric models that have mostly been developed for mountainous catchments. This parametric rainfall-runoff model can be used all over the world, both in freely draining lowland catchments and in polders with controlled water levels. Here, we present the model implementation and our recent experience in training students and practitioners to use the model. WALRUS has several advantages that facilitate practical application. Firstly, WALRUS is computationally efficient, which allows for operational forecasting and uncertainty estimation by running ensembles. Secondly, the code is set up so that it can be used by both practitioners and researchers. For direct use by practitioners, defaults are implemented for the relations between model variables and for the computation of initial conditions based on discharge only, leaving only four parameters that require calibration. For research purposes, the defaults can easily be changed. Finally, an approach with flexible time steps increases numerical stability and makes model parameter values independent of time-step size, which facilitates use of the model with the same parameter set for multi-year water balance studies as well as detailed analyses of individual flood peaks. The open-source model code is currently implemented in R and compiled into a package, which will be made available through the R CRAN server. A small massive open online course (MOOC) is being developed to give students, researchers and practitioners step-by-step WALRUS training. This course contains explanations of the model's elements, advantages and limitations, as well as hands-on exercises for learning how to use WALRUS. All code, course materials, literature and examples will be collected on a dedicated website, which can be found via www.wageningenur.nl/hwm. References: C.C. Brauer, et al. (2014a). Geosci. Model Dev. Discuss., 7, 1357-1411. C.C. Brauer, et al. (2014b). Hydrol. Earth Syst. Sci. Discuss., 11, 2091-2148.
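
    The flexible-time-step idea can be illustrated generically: accept a sub-step only while the resulting state change stays below a tolerance, halving it otherwise, so that calibrated parameters do not absorb step-size artifacts. The sketch below (in Python rather than the model's R, and not the actual WALRUS code) shows the pattern on a toy linear reservoir.

```python
def integrate_flexible(f, state, t_end, dt_max, tol):
    """Advance dstate/dt = f(state) with adaptive sub-steps so that
    results (and calibrated parameters) stay independent of the
    nominal step size; a generic sketch of the idea, not WALRUS code."""
    t, dt = 0.0, dt_max
    while t < t_end:
        dt = min(dt, t_end - t)
        while True:
            change = f(state) * dt
            if abs(change) <= tol or dt <= 1e-9:
                break
            dt *= 0.5                    # halve until the change is small
        state += change
        t += dt
        dt = min(dt * 2.0, dt_max)       # try growing the step again
    return state

# Toy linear reservoir: dS/dt = P - S/k
P, k = 2.0, 10.0
S = integrate_flexible(lambda s: P - s / k, 5.0, t_end=100.0, dt_max=1.0, tol=0.05)
print(S)   # approaches the equilibrium storage P * k = 20
```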

  15. Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.

    PubMed

    Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K

    2017-07-01

    Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance, an extended base of support can be assistive, but healthy young adults tend to walk with relatively narrow steps compared to vulnerable populations (e.g., older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbing the optimal coupling of muscle functions, leading to additional muscle work and an associated reduction in gait economy; young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and on spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on the preferred walking data, two parallel lines were drawn on the walkway to indicate a 50% larger step width, and participants targeted the lines with their heels as they walked. In addition to step width, which defined the walking conditions, other spatio-temporal gait parameters including step length, double support time and single support time were obtained, and average hip muscle forces during swing were modeled. Results showed that step length increased in wide step walking, the Gluteus Minimus muscles were more active, and the Gracilis and Adductor Longus showed considerably reduced forces. In conclusion, greater use of abductors and reduced adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and pathological populations.

  16. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    NASA Astrophysics Data System (ADS)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that accounts for the carry-over effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is presented to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, and three benchmark wind speed prediction methods are used to compare performance. The results show that the proposed model has the best performance over different time horizons.
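
    A minimal version of the proposed mapping strategy (correcting the prediction at time t using both the NWP value and the measurement at t-1) can be written with scikit-learn's SVR, as sketched below on synthetic data. The features, kernel, and series are illustrative, and a true multi-step-ahead run would feed corrected values back in place of measurements.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-ins: hmd[t] is the measured wind speed at hour t,
# nwp[t] a biased, noisy forecast of it.
rng = np.random.default_rng(2)
hmd = 8 + 2 * np.sin(np.arange(500) / 12.0) + rng.normal(0, 0.5, 500)
nwp = hmd + rng.normal(0.8, 1.0, 500)

X = np.column_stack([nwp[1:], hmd[:-1]])    # features: [NWP_t, HMD_{t-1}]
y = hmd[1:]                                  # target: measured speed at t

model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
corrected = model.predict(X[400:])

rmse_raw = np.sqrt(np.mean((nwp[401:] - y[400:]) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - y[400:]) ** 2))
print(rmse_raw, rmse_cor)   # correction should reduce the NWP error
```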

  17. Analysis of aggregated tick returns: Evidence for anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Weber, Philipp

    2007-01-01

    In order to investigate the origin of large price fluctuations, we analyze the stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a given time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width, while the difference between the numbers of steps in the positive and negative directions in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time; hence its mean value can characterize an interval of many trades and turns out to be an important determinant of large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For accurate agreement with the empirical distribution, we also take into account the asymmetries of the step widths in the two directions, together with the cross-correlations between these asymmetries and the mean step width, as well as the signs of the steps.

  18. A computational kinetic model of diffusion for molecular systems.

    PubMed

    Teo, Ivan; Schulten, Klaus

    2013-09-28

    Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely the diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, the surrounding electrostatic field, and its location. In order to account for solute energetics and mobility in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion built on a Markov State Model framework. The prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive, multi-nanosecond molecular dynamics simulations of the proteins and their environment. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells, distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values, such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
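
    The essential construction, jump rates between grid cells built from local diffusion coefficients and PMF differences and then propagated by kinetic Monte Carlo, can be sketched in one dimension as below. The rate expression used is a standard detailed-balance discretization, not necessarily the one adopted in the paper, and the diffusion map and PMF are toy stand-ins for MD-derived inputs.

```python
import numpy as np

kT = 1.0
n = 100
x = np.linspace(0.0, 10.0, n); h = x[1] - x[0]
D = np.full(n, 1.0)              # diffusion map (from MD; here uniform)
F = 0.5 * (x - 5.0) ** 2         # PMF (from MD; here a toy harmonic well)

# Jump rates satisfying detailed balance w.r.t. exp(-F/kT)
rate_right = D[:-1] / h**2 * np.exp(-(F[1:] - F[:-1]) / (2 * kT))
rate_left  = D[1:]  / h**2 * np.exp(-(F[:-1] - F[1:]) / (2 * kT))

# Kinetic Monte Carlo: one particle hopping between cells
rng = np.random.default_rng(3)
i, t = 0, 0.0
for _ in range(20000):
    r_r = rate_right[i] if i < n - 1 else 0.0
    r_l = rate_left[i - 1] if i > 0 else 0.0
    total = r_r + r_l
    t += rng.exponential(1.0 / total)         # waiting time to next jump
    i += 1 if rng.random() < r_r / total else -1
print("final cell:", i, "elapsed time:", t)   # particle settles near the well
```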

  19. A simple inertial formulation of the shallow water equations for efficient two-dimensional flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Bates, Paul D.; Horritt, Matthew S.; Fewtrell, Timothy J.

    2010-06-01

    This paper describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models where flows in the x and y Cartesian directions are decoupled. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to simulation results from the diffusive storage cell code of Hunter et al. (2005). For the most complex test, involving the fine spatial resolution simulation of flow in a topographically complex urban area, the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes, where the stable time step scales with (1/Δx)², the new equation set developed here represents shallow water wave propagation, so stability is controlled by the Courant-Friedrichs-Lewy condition and the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. For the tests reported in this paper the maximum speed-up achieved over a diffusive storage cell model was 1120×, although the actual value will depend on model resolution and water surface gradient. Solutions using the new equation set are shown to be grid-independent for the conditions considered and to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors on predicted depth were noted when Manning's n = 0.01. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
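
    A minimal 1D sketch of an update in the spirit of the inertial formulation described above (explicit acceleration term, semi-implicit Manning friction, CFL-limited time step); the grid, parameter values and boundary treatment are illustrative:

      import numpy as np

      g, n_mann, dx = 9.81, 0.03, 50.0
      z = np.zeros(101)                                    # flat bed elevation (m)
      h = np.maximum(0.0, 2.0 - 0.02 * np.arange(101))     # water depth (m)
      q = np.zeros(100)                                    # unit-width discharge at faces

      for _ in range(200):
          # Stable time step scales with 1/dx (CFL), not (1/dx)^2 as in
          # diffusive storage-cell codes.
          dt = 0.7 * dx / np.sqrt(g * max(h.max(), 0.01))
          h_flow = np.maximum(np.maximum(h[:-1], h[1:]), 1e-3)   # depth at faces
          slope = np.diff(h + z) / dx
          # Explicit acceleration term; friction treated semi-implicitly:
          q = (q - g * h_flow * dt * slope) / (
              1.0 + g * dt * n_mann**2 * np.abs(q) / h_flow**(7.0 / 3.0))
          # Mass conservation in the storage cells.
          h[1:-1] -= dt * np.diff(q) / dx
          h = np.maximum(h, 0.0)   # guard the wet/dry front in this simple sketch

      print("final depth range:", h.min(), h.max())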

  20. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short term objective function is a surrogate for achieving long term multi-objective results. The long term performance for any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research has been initiated on a wrapper that will employ a genetic algorithm to improve the form of the rule (conditions, constraints, and short term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions, accounting for uncertainty at least implicitly. The author asserts that real world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real world examples.
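
    A toy stand-in for the per-time-step optimization idea, assuming scipy's linprog and an invented two-demand allocation problem; the real OASIS model solves a conditional MILP with user-defined objectives and a realistic physical network, which this sketch does not attempt:

      import numpy as np
      from scipy.optimize import linprog

      # Each time step: allocate available water to crop and M&I demands using
      # illustrative marginal values, subject to a storage-carryover constraint.
      storage, inflows = 100.0, [20.0, 5.0, 40.0]
      for t, inflow in enumerate(inflows):
          avail = storage + inflow
          c = [-3.0, -5.0]                  # maximize 3*crop + 5*mi -> minimize -()
          A_ub = [[1.0, 1.0]]               # total release cannot exceed available water
          b_ub = [avail - 30.0]             # keep 30 units in storage
          bounds = [(0.0, 25.0), (0.0, 15.0)]   # demand caps
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
          crop, mi = res.x
          storage = avail - crop - mi
          print(f"t={t}: crop={crop:.1f}, M&I={mi:.1f}, storage={storage:.1f}")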

  1. Electromagnetic ray tracing model for line structures.

    PubMed

    Tan, C B; Khoh, A; Yeo, S H

    2008-03-17

    In this paper, a model for electromagnetic scattering of line structures is established based on a high-frequency approximation approach - ray tracing. This electromagnetic ray tracing (ERT) model gives the advantage of identifying each physical field that contributes to the total solution of the scattering phenomenon. Besides the geometrical optics field, different diffracted fields associated with the line structures are also discussed and formulated. A step-by-step addition of each electromagnetic field is given to elucidate the causes of a disturbance in the amplitude profile. The accuracy of the ERT model is also discussed by comparison with the reference finite difference time domain (FDTD) solution, which shows a promising result for a single polysilicon line structure with a width as narrow as 0.4 wavelength.

  2. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has been previously shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, will be presented that give a two to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.

  3. Metric projection for dynamic multiplex networks.

    PubMed

    Jurman, Giuseppe

    2016-08-01

    Evolving multiplex networks are a powerful model for representing the dynamics along time of different phenomena, such as social networks, power grids, biological pathways. However, exploring the structure of the multiplex network time series is still an open problem. Here we propose a two-step strategy to tackle this problem based on the concept of distance (metric) between networks. Given a multiplex graph, first a network of networks is built for each time step, and then a real-valued time series is obtained from the sequence of (simple) networks by evaluating the distance of each element from the first element of the series. The effectiveness of this approach in detecting the changes occurring along the original time series is shown on a synthetic example first, and then on the Gulf dataset of political events.
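
    A minimal sketch of the two-step projection, with a simple spectral distance standing in for whichever network metric the paper uses; the snapshots and the injected change point are synthetic:

      import numpy as np

      def spectral_distance(a, b):
          """Illustrative network distance: L2 gap between sorted adjacency spectra."""
          ea = np.sort(np.linalg.eigvalsh(a))
          eb = np.sort(np.linalg.eigvalsh(b))
          return float(np.linalg.norm(ea - eb))

      rng = np.random.default_rng(1)
      n_nodes, n_steps = 30, 20
      # Toy collapsed snapshots: one adjacency matrix per time step, with a
      # structural change injected halfway through.
      snapshots = []
      for t in range(n_steps):
          p = 0.1 if t < n_steps // 2 else 0.25
          a = np.triu((rng.random((n_nodes, n_nodes)) < p).astype(float), 1)
          snapshots.append(a + a.T)

      # Step 2: project the graph sequence onto a real-valued series via the
      # distance of each snapshot from the first element.
      series = [spectral_distance(snapshots[0], a) for a in snapshots]
      print(np.round(series, 2))   # the jump marks the change point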

  4. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes-searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  5. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time series, where evaluation is based on prior results combined with new data for the current iteration. It has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is analyzed for anomalies.

  6. Nonstationary porosity evolution in mixing zone in coastal carbonate aquifer using an alternative modeling approach.

    PubMed

    Laabidi, Ezzeddine; Bouhlila, Rachida

    2015-07-01

    In the last few decades, hydrogeochemical problems have benefited from the strong interest in numerical modeling. One of the most recognized hydrogeochemical problems is the dissolution of calcite in the mixing zone below limestone coastal aquifers. In many works, this problem has been modeled using a coupling algorithm between a density-dependent flow model and a geochemical model. A related difficulty is that, because of the high nonlinearity of the coupled set of equations, high computational effort is needed. During calcite dissolution, an increase in permeability can be identified, which can induce an increase in the penetration of seawater into the aquifer. The majority of previous studies used a fully coupled reactive transport model to simulate this problem. Romanov and Dreybrodt (J Hydrol 329:661-673, 2006) used an alternative approach to quantify the porosity evolution in the mixing zone below a coastal carbonate aquifer at steady state. This approach is based on the analytic solution presented by Phillips (1991) in his book Flow and Reactions in Permeable Rock, which shows that it is possible to decouple the complex set of equations. The resulting dissolution rate is proportional to the square of the salinity gradient, which can be calculated using a density-driven flow code, and to the reaction rate, which can be calculated using a geochemical code. In this work, this equation is used in a nonstationary, step-by-step regime. At each time step, the quantity of dissolved calcite is quantified, the change in porosity is calculated, and the permeability is updated. The reaction rate, which is the second derivative of the calcium equilibrium concentration in the equation, is calculated using the PHREEQC code (Parkhurst and Appelo 1999). This result is used in GEODENS (Bouhlila 1999; Bouhlila and Laabidi 2008) to calculate the change in porosity after computing the salinity gradient. For the next time step, the same protocol is applied, but with the updated porosity and permeability distributions.
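
    A minimal sketch of the step-by-step update loop, assuming an invented salinity profile and a lumped reaction-rate factor in place of the PHREEQC/GEODENS coupling; the Kozeny-Carman-style permeability update is our illustrative choice:

      import numpy as np

      # Dissolution in the mixing zone taken proportional to the squared salinity
      # gradient times a lumped reaction-rate factor, per the decoupled approach.
      nx, dx, dt = 50, 10.0, 1.0
      phi = np.full(nx, 0.30)            # porosity
      k0, phi0 = 1e-12, 0.30             # reference permeability (m^2) and porosity
      k = np.full(nx, k0)
      rate_factor = 0.02                 # illustrative lumped reaction-rate term

      for step in range(100):
          x = np.arange(nx) * dx
          # Salinity profile across the mixing zone (would come from the
          # density-driven flow code; here a slowly migrating front).
          sal = 35.0 / (1.0 + np.exp(-(x - 250.0 - 0.5 * step) / 30.0))
          grad = np.gradient(sal, dx)
          phi = np.minimum(phi + rate_factor * grad**2 * dt, 0.6)
          # Update permeability from porosity (Kozeny-Carman-style relation).
          k = k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2

      print("max porosity after 100 steps:", round(phi.max(), 3))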

  7. Signatures of ecological processes in microbial community time series.

    PubMed

    Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie

    2018-06-28

    Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.

  8. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step-length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first makes better use of resources for small models of dimension equal to or less than 300x300x300 nodes, and under-samples the wavefield, reducing the number of stored time steps by an order of magnitude. For bigger models the wavefield is stored only at the boundaries of the model and then re-injected while the residuals are backpropagated, allowing the correlation to be computed 'on the fly'. In terms of computational resources, the elastic code is an order of magnitude more demanding than the equivalent acoustic code. We have combined shared-memory with distributed-memory parallelisation using OpenMP and MPI respectively. Thus, we take advantage of the increasingly common multi-core architecture processors. We have successfully applied our inversion algorithm to different realistic complex 3D models. The models had non-linear relations between pressure and shear wave velocities. The shorter wavelengths of the shear waves improve the resolution of the images obtained with respect to a purely acoustic approach.

  9. Zero-Inertial Recession for a Kinematic Wave Model

    USDA-ARS?s Scientific Manuscript database

    Kinematic-wave models of surface irrigation assume a fixed relationship between depth and discharge (typically, normal depth). When surface irrigation inflow is cut off, the calculated upstream flow depth goes to zero, since the discharge is zero. For short time steps, use of the Kinematic Wave mode...

  10. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept the prescribed initial conditions or obtain them by simulating a steady state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  11. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  12. Extravascular transport in normal and tumor tissues.

    PubMed

    Jain, R K; Gerlowski, L E

    1986-01-01

    The transport characteristics of the normal and tumor tissue extravascular space provide the basis for the determination of the optimal dosage and schedule regimes of various pharmacological agents in detection and treatment of cancer. In order for the drug to reach the cellular space where most therapeutic action takes place, several transport steps must first occur: (1) tissue perfusion; (2) permeation across the capillary wall; (3) transport through interstitial space; and (4) transport across the cell membrane. Any of these steps including intracellular events such as metabolism can be the rate-limiting step to uptake of the drug, and these rate-limiting steps may be different in normal and tumor tissues. This review examines these transport limitations, first from an experimental point of view and then from a modeling point of view. Various types of experimental tumor models which have been used in animals to represent human tumors are discussed. Then, mathematical models of extravascular transport are discussed from the perspective of two approaches: compartmental and distributed. Compartmental models lump one or more sections of a tissue or body into a "compartment" to describe the time course of disposition of a substance. These models contain "effective" parameters which represent the entire compartment. Distributed models consider the structural and morphological aspects of the tissue to determine the transport properties of that tissue. These distributed models describe both the temporal and spatial distribution of a substance in tissues. Each of these modeling techniques is described in detail with applications for cancer detection and treatment in mind.

  13. Time spent in sedentary posture is associated with waist circumference and cardiovascular risk.

    PubMed

    Tigbe, W W; Granat, M H; Sattar, N; Lean, M E J

    2017-05-01

    The relationship between metabolic risk and time spent sitting, standing and stepping has not been well established. The present study aimed to determine associations of objectively measured time spent sitting, standing and stepping with coronary heart disease (CHD) risk. A cross-sectional study of healthy non-smoking Glasgow postal workers, n=111 (55 office workers, 5 women, and 56 walking/delivery workers, 10 women), who wore activPAL physical activity monitors for 7 days. Cardiovascular risks were assessed by metabolic syndrome categorisation and 10-year PROCAM (prospective cardiovascular Munster) risk. Mean (s.d.) age was 40 (8) years, body mass index 26.9 (3.9) kg m⁻² and waist circumference 95.4 (11.9) cm. Mean (s.d.) high-density lipoprotein cholesterol (HDL cholesterol) was 1.33 (0.31), low-density lipoprotein cholesterol 3.11 (0.87) and triglycerides 1.23 (0.64) mmol l⁻¹, and 10-year PROCAM risk was 1.8 (1.7)%. The participants spent mean (s.d.) 9.1 (1.8) h per day sedentary, 7.6 (1.2) h per day sleeping, 3.9 (1.1) h per day standing and 3.3 (0.9) h per day stepping, accumulating 14 708 (4984) steps per day in 61 (25) sit-to-stand transitions per day. In univariate regressions - adjusting for age, sex, family history of CHD, shift worked, job type and socioeconomic status - waist circumference (P=0.005), fasting triglycerides (P=0.002), HDL cholesterol (P=0.001) and PROCAM risk (P=0.047) were detrimentally associated with sedentary time. These associations remained significant after further adjustment for sleep, standing and stepping in stepwise regression models. However, after further adjustment for waist circumference, the associations were not significant. Compared with those without the metabolic syndrome, participants with the metabolic syndrome were significantly less active - fewer steps, shorter stepping duration and longer time sitting. Those with no metabolic syndrome features walked >15 000 steps per day or spent >7 h per day upright. Longer time spent in sedentary posture is significantly associated with higher CHD risk and larger waist circumference.

  14. Where did the time go? Friction evolves with slip following large velocity steps, normal stress steps, and (?) during long holds

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bhattacharya, P.; Tullis, T. E.; Okazaki, K.; Beeler, N. M.

    2016-12-01

    The popular constitutive formulations of rate-and-state friction offer two end-member views on whether friction evolves only with slip (Slip law state evolution) or with time even without slip (Aging law state evolution). While rate stepping experiments show support for the Slip law, laboratory observed frictional behavior of initially bare rock surfaces near zero slip rate has traditionally been interpreted to show support for time-dependent evolution of frictional strength. Such laboratory derived support for time-dependent evolution has been one of the motivations behind the Aging law being widely used to model earthquake cycles on natural faults. Through a combination of theoretical results and new experimental data on initially bare granite, we show stronger support for the other end-member view, i.e. that friction under a wide range of sliding conditions evolves only with slip. Our dataset is unique in that it combines up to 3.5 orders of magnitude rate steps, sequences of holds up to 10000 s, and 5% normal stress steps at order of magnitude different sliding rates during the same experimental run. The experiments were done on the Brown rotary shear apparatus using servo feedback, making the machine stiff enough to provide very large departures from steady state while maintaining stable, quasi-static sliding. Across these diverse sliding conditions, and in particular for both large velocity step decreases and the longest holds, the data are much more consistent with the Slip law version of slip-dependence than the time-dependence formulated in the Aging law. The shear stress response to normal stress steps is also consistently better explained by the Slip law when paired with the Linker-Dieterich type response to normal stress perturbations. However, the remarkable symmetry and slip-dependence of the normal stress step increases and decreases suggest deficiencies in the Linker-Dieterich formulation that we will probe in future experiments. High quality measurements of interface compaction from the normal-stress steps suggest that the instantaneous changes in state and contact area are opposite in sign, indicating that state evolution might be fundamentally connected to contact quality, and not quantity alone.
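
    A minimal sketch of the two end-member state-evolution laws for a tenfold velocity step, using the standard rate-and-state forms; parameter values are illustrative, not the experimental ones:

      import numpy as np

      # State evolution after a 10x velocity step under the two end members:
      #   Aging law: d(theta)/dt = 1 - V*theta/Dc                (evolves even at V = 0)
      #   Slip law:  d(theta)/dt = -(V*theta/Dc) * ln(V*theta/Dc)  (requires slip)
      # Friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc).
      mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # Dc in m, V in m/s
      V, dt, nsteps = 1e-5, 1e-3, 20000                   # stepped-up velocity

      theta_age = theta_slip = Dc / V0      # steady state at the old velocity V0
      mu = lambda th: mu0 + a * np.log(V / V0) + b * np.log(V0 * th / Dc)

      for i in range(1, nsteps + 1):
          theta_age += dt * (1.0 - V * theta_age / Dc)
          x = V * theta_slip / Dc
          theta_slip += dt * (-x * np.log(x))
          if i % 5000 == 0:
              slip_um = i * dt * V * 1e6
              print(f"slip = {slip_um:5.0f} um   mu_aging = {mu(theta_age):.4f}"
                    f"   mu_slip = {mu(theta_slip):.4f}")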

  15. Application of global kinetic models to HMX beta-delta transition and cookoff processes.

    PubMed

    Wemhoff, Aaron P; Burnham, Alan K; Nichols, Albert L

    2007-03-08

    The reduction of the number of reactions in kinetic models for both the HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia instrumented thermal ignition (SITI) and scaled thermal explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on one-dimensional time to explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multistep Arrhenius model and can contain up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from differential scanning calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% of chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multistep Arrhenius approach, and up to 11% using a Prout-Tompkins cookoff model.
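
    A minimal sketch of a single-reaction extended Prout-Tompkins cookoff calculation with Arrhenius kinetics; the kinetic parameters and critical extent of reaction are illustrative, not the calibrated ALE3D values:

      import numpy as np

      # Extended Prout-Tompkins rate at constant temperature:
      #   d(alpha)/dt = k(T) * (1 - alpha)^n * (alpha + a0)^m,
      # with Arrhenius k(T) = A * exp(-E / (R*T)).
      A, E, R = 1e19, 190e3, 8.314          # 1/s, J/mol, J/(mol*K) (illustrative)
      n_ord, m_ord, a0 = 1.0, 1.0, 1e-4     # reaction orders and nucleation seed

      def time_to_explosion(T, alpha_crit=0.99, dt=0.01, t_max=1e4):
          k = A * np.exp(-E / (R * T))
          alpha, t = 0.0, 0.0
          while alpha < alpha_crit and t < t_max:
              alpha += dt * k * (1 - alpha) ** n_ord * (alpha + a0) ** m_ord
              t += dt
          return t

      # Crude ODTX-style trend: hotter boundary -> much shorter induction time.
      for T in (500.0, 520.0, 540.0):
          print(f"T = {T:.0f} K  ->  t_explosion ~ {time_to_explosion(T):.1f} s")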

  16. Evaluating the effectiveness of organisational-level strategies with or without an activity tracker to reduce office workers' sitting time: a cluster-randomised trial.

    PubMed

    Brakenridge, C L; Fjeldsoe, B S; Young, D C; Winkler, E A H; Dunstan, D W; Straker, L M; Healy, G N

    2016-11-04

    Office workers engage in high levels of sitting time. Effective, context-specific, and scalable strategies are needed to support widespread sitting reduction. This study aimed to evaluate organisational-support strategies alone or in combination with an activity tracker to reduce sitting in office workers. From one organisation, 153 desk-based office workers were cluster-randomised (by team) to organisational support only (e.g., manager support, emails; 'Group ORG', 9 teams, 87 participants), or organisational support plus LUMOback activity tracker ('Group ORG + Tracker', 9 teams, 66 participants). The waist-worn tracker provided real-time feedback and prompts on sitting and posture. ActivPAL3 monitors were used to ascertain primary outcomes (sitting time during work- and overall hours) and other activity outcomes: prolonged sitting time (≥30 min bouts), time between sitting bouts, standing time, stepping time, and number of steps. Health and work outcomes were assessed by questionnaire. Changes within each group (three and 12 months) and differences between groups were analysed by linear mixed models. Missing data were multiply imputed. At baseline, participants (46% women, 23-58 years) spent (mean ± SD) 74.3 ± 9.7% of their workday sitting, 17.5 ± 8.3% standing and 8.1 ± 2.7% stepping. Significant (p < 0.05) reductions in sitting time (both work and overall) were observed within both groups, but only at 12 months. For secondary activity outcomes, Group ORG significantly improved in work prolonged sitting, time between sitting bouts and standing time, and overall prolonged sitting time (12 months), and in overall standing time (three and 12 months); Group ORG + Tracker significantly improved in work prolonged sitting, standing, stepping and overall standing time (12 months). Adjusted for confounders, the only significant between-group differences were a greater stepping time and step count for Group ORG + Tracker relative to Group ORG (+20.6 min/16 h day, 95% CI: 3.1, 38.1, p = 0.021; +846.5 steps/16 h day, 95% CI: 67.8, 1625.2, p = 0.033) at 12 months. Observed changes in health and work outcomes were small and not statistically significant. Organisational-support strategies with or without an activity tracker resulted in improvements in sitting, prolonged sitting and standing; adding a tracker enhanced stepping changes. Improvements were most evident at 12 months, suggesting the organisational-support strategies may have taken time to embed within the organisation. Australian New Zealand Clinical Trial Registry: ACTRN12614000252617. Registered 10 March 2014.

  17. Index extraction for electromagnetic field evaluation of high power wireless charging system

    PubMed Central

    2017-01-01

    This paper presents precise dosimetry for a highly resonant wireless power transfer (HR-WPT) system using an anatomically realistic human voxel model. The dosimetry for the HR-WPT system, designed to operate at 13.56 MHz, one of the ISM frequency bands, is conducted at various distances between the human model and the system, and under aligned and misaligned conditions of the transmitting and receiving circuits. The specific absorption rates in the human body are computed by a two-step approach: in the first step, the field generated by the HR-WPT system is calculated, and in the second step the specific absorption rates are computed with the scattered-field finite-difference time-domain method, regarding the fields obtained in the first step as the incident fields. The safety compliance for non-uniform field exposure from the HR-WPT system is discussed with reference to the international safety guidelines. Furthermore, the coupling factor concept is employed to relax the maximum allowable transmitting power, and coupling factors derived from the dosimetry results are presented. In this calculation, the external magnetic field restriction for the HR-WPT system can be relaxed by approximately four times using the coupling factor in the worst exposure scenario.

  18. Investigation of gene expressions in differentiated cell derived bone marrow stem cells during bone morphogenetic protein-4 treatments with Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Zafari, Jaber; Jouni, Fatemeh Javani; Ahmadvand, Ali; Abdolmaleki, Parviz; Soodi, Malihe; Zendehdel, Rezvan

    2017-02-01

    A model was set up to predict differentiation patterns based on data extracted from FTIR spectroscopy. To this end, bone marrow stem cells (BMSCs) were differentiated to primordial germ cells (PGCs). Changes in cellular macromolecules at 0, 24, 48, 72, and 96 h of differentiation, representing different steps of the differentiation procedure, were investigated using FTIR spectroscopy. Also, the expression of pluripotency genes (Oct-4, Nanog and c-Myc) and specific genes (Mvh, Stella and Fragilis) was investigated by real-time PCR, and the expression of these genes at the five differentiation steps was predicted by FTIR spectroscopy. The FTIR spectra showed changes in the pattern of band intensities at the different differentiation steps, with the ratio of the CH2 symmetric to CH2 asymmetric stretching band areas increasing over the stepwise differentiation procedure. An ensemble of expert methods, including regression tree (RT), boosting algorithm (BA), and generalized regression neural network (GRNN), was the best method to predict the gene expression from FTIR spectroscopy. In conclusion, the model was able to distinguish the patterns of the different steps of cell differentiation by using features extracted from the FTIR spectra.

  19. Rainfall, streamflow, and peak stage data collected at the Murfreesboro, Tennessee, gaging network, March 1989 through July 1992

    USGS Publications Warehouse

    Outlaw, G.S.; Butner, D.E.; Kemp, R.L.; Oaks, A.T.; Adams, G.S.

    1992-01-01

    Rainfall, stage, and streamflow data in the Murfreesboro area, Middle Tennessee, were collected from March 1989 through July 1992 from a network of 68 gaging stations. The network consists of 10 tipping-bucket rain gages, 2 continuous-record streamflow gages, 4 partial-record flood hydrograph gages, and 72 crest-stage gages. Data collected by the gages include 5-minute time-step rainfall hyetographs, 15-minute time-step flood hydrographs, and peak-stage elevations. Data are stored in a computer data base and are available for many computer modeling and engineering applications.

  20. Accelerated simulations of aromatic polymers: application to polyether ether ketone (PEEK)

    NASA Astrophysics Data System (ADS)

    Broadbent, Richard J.; Spencer, James S.; Mostofi, Arash A.; Sutton, Adrian P.

    2014-10-01

    For aromatic polymers, the out-of-plane oscillations of aromatic groups limit the maximum accessible time step in a molecular dynamics simulation. We present a systematic approach to removing such high-frequency oscillations from planar groups along aromatic polymer backbones, while preserving the dynamical properties of the system. We consider, as an example, the industrially important polymer, polyether ether ketone (PEEK), and show that this coarse graining technique maintains excellent agreement with the fully flexible all-atom and all-atom rigid bond models whilst allowing the time step to increase fivefold to 5 fs.

  1. Global phenomena from local rules: Peer-to-peer networks and crystal steps

    NASA Astrophysics Data System (ADS)

    Finkbiner, Amy

    Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.

  2. An information-carrying and knowledge-producing molecular machine. A Monte-Carlo simulation.

    PubMed

    Kuhn, Christoph

    2012-02-01

    The concept called Knowledge is a measure of the quality of genetically transferred information. Its usefulness is demonstrated quantitatively in a Monte-Carlo simulation of critical steps in an origin-of-life model. The model describes the origin of a bio-like genetic apparatus by a long sequence of physical-chemical steps: it starts with the presence of a self-replicating oligomer and a specifically structured environment in time and space that allows for the formation of aggregates such as assembler-hairpins devices and, at a later stage, an assembler-hairpins-enzyme device - a first translation machine.

  3. NAM - Eta to NMM conversion 20060613

    Science.gov Websites

    North American Mesoscale (NAM) time slot. Currently the Eta forecast model is used for the NAM, but Eta and its output will be available at the same time. There will be some minor addition of products, MB per file (time step). The following fields will be added: omega @ 10, 20, & 30 mb; height.

  4. USGS perspectives on an integrated approach to watershed and coastal management

    USGS Publications Warehouse

    Larsen, Matthew C.; Hamilton, Pixie A.; Haines, John W.; Mason, Jr., Robert R.

    2010-01-01

    The writers discuss three critically important steps necessary for achieving improved integrated approaches to watershed and coastal protection and management. These steps involve modernization of monitoring networks, creation of common data and web services infrastructures, and development of modeling, assessment, and research tools. Long-term monitoring is needed for tracking the effectiveness of approaches for controlling land-based sources of nutrients, contaminants, and invasive species. The integration of mapping and monitoring with conceptual and mathematical models and multidisciplinary assessments is important in making well-informed decisions. Moreover, a better integrated data network is essential for mapping, statistical, and modeling applications, and for timely dissemination of data and information products to a broad community of users.

  5. Single-particle stochastic heat engine.

    PubMed

    Rana, Shubhashis; Pal, P S; Saha, Arnab; Jayannavar, A M

    2014-10-01

    We have performed an extensive analysis of a single-particle stochastic heat engine constructed by manipulating a Brownian particle in a time-dependent harmonic potential. The cycle consists of two isothermal steps at different temperatures and two adiabatic steps similar to that of a Carnot engine. The engine shows qualitative differences in inertial and overdamped regimes. All the thermodynamic quantities, including efficiency, exhibit strong fluctuations in a time periodic steady state. The fluctuations of stochastic efficiency dominate over the mean values even in the quasistatic regime. Interestingly, our system acts as an engine provided the temperature difference between the two reservoirs is greater than a finite critical value which in turn depends on the cycle time and other system parameters. This is supported by our analytical results carried out in the quasistatic regime. Our system works more reliably as an engine for large cycle times. By studying various model systems, we observe that the operational characteristics are model dependent. Our results clearly rule out any universal relation between efficiency at maximum power and temperature of the baths. We have also verified fluctuation relations for heat engines in time periodic steady state.
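
    A minimal overdamped Langevin sketch of such an engine, with the stiffness of the harmonic trap ramped through a hot and a cold isothermal branch; the protocol and parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(2)
      gamma, kB = 1.0, 1.0
      Th, Tc = 2.0, 1.0
      dt, n_per_branch, n_cycles = 1e-3, 4000, 100

      def isotherm(x, k_start, k_end, T):
          """Euler-Maruyama integration while stiffness ramps k_start -> k_end.
          Returns final position and work done ON the particle."""
          work = 0.0
          ks = np.linspace(k_start, k_end, n_per_branch)
          for i in range(n_per_branch - 1):
              noise = np.sqrt(2 * kB * T * dt / gamma) * rng.normal()
              x += -ks[i] * x * dt / gamma + noise
              work += 0.5 * (ks[i + 1] - ks[i]) * x**2   # dW = (dk/2) * x^2
          return x, work

      x, W = 0.0, 0.0
      for _ in range(n_cycles):
          x, w1 = isotherm(x, 2.0, 1.0, Th)   # hot expansion (stiffness down)
          x, w2 = isotherm(x, 1.0, 2.0, Tc)   # cold compression (stiffness up)
          W += w1 + w2

      # Negative mean work per cycle means work is extracted on average.
      print("mean work per cycle:", W / n_cycles)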

  6. Real time implementation and control validation of the wind energy conversion system

    NASA Astrophysics Data System (ADS)

    Sattar, Adnan

    The purpose of this thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the various power system simulation tools available, RTDS is one of the most powerful. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control-relevant analysis. The hardware is based upon digital signal processors mounted in racks. The RTDS simulator has the advantage of interfacing real-world signals from external devices and is hence used to test protection and control system equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), as a flexible AC transmission system (FACTS) device, is used to enhance the fault ride through (FRT) capability of the fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and the VSC large time-step of RTDS. The simulation results of the RTDS model system are compared with those of the off-line EMTP software PSCAD/EMTDC. A new operational scheme for a MW class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control real and reactive power simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a three-phase realistic grid is adopted with the RSCAD simulation through the use of the optical analogue digital converter (OADC) card of the RTDS. Steady-state and LVRT analyses are carried out to validate the proposed operational scheme. The simulation results show good agreement with the real-time simulation software and thus can be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a sodium sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can also provide reactive power support to the system along with the real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and a suitable control is developed for the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using RTDS in dynamic and transient characteristic analyses of wind farms are also demonstrated.

  7. Climate change or climate cycles? Snowpack trends in the Olympic and Cascade Mountains, Washington, USA.

    PubMed

    Barry, Dwight; McDonald, Shea

    2013-01-01

    Climate change could significantly influence seasonal streamflow and water availability in the snowpack-fed watersheds of Washington, USA. Descriptions of snowpack decline often use linear ordinary least squares (OLS) models to quantify this change. However, the region's precipitation is known to be related to climate cycles. If snowpack decline is more closely related to these cycles, an OLS model cannot account for this effect, and thus both descriptions of trends and estimates of decline could be inaccurate. We used intervention analysis to determine whether snow water equivalent (SWE) in 25 long-term snow courses within the Olympic and Cascade Mountains are more accurately described by OLS (to represent gradual change), stationary (to represent no change), or step-stationary (to represent climate cycling) models. We used Bayesian information-theoretic methods to determine these models' relative likelihood, and we found 90 models that could plausibly describe the statistical structure of the 25 snow courses' time series. Posterior model probabilities of the 29 "most plausible" models ranged from 0.33 to 0.91 (mean = 0.58, s = 0.15). The majority of these time series (55%) were best represented as step-stationary models with a single breakpoint at 1976/77, coinciding with a major shift in the Pacific Decadal Oscillation. However, estimates of SWE decline differed by as much as 35% between statistically plausible models of a single time series. This ambiguity is a critical problem for water management policy. Approaches such as intervention analysis should become part of the basic analytical toolkit for snowpack or other climatic time series data.
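
    A minimal sketch of the model comparison on a synthetic SWE series, using BIC computed from Gaussian log-likelihoods; the 1976/77 breakpoint follows the text, everything else is illustrative:

      import numpy as np

      # Compare three descriptions of an annual SWE series by BIC: stationary
      # mean, step-stationary (break at 1976/77), and OLS linear trend.
      rng = np.random.default_rng(3)
      years = np.arange(1950, 2010)
      swe = np.where(years <= 1976, 60.0, 45.0) + rng.normal(0, 8, years.size)

      def bic(residuals, n_params):
          n = residuals.size
          sigma2 = np.mean(residuals**2)
          loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
          return n_params * np.log(n) - 2.0 * loglik

      r_stat = swe - swe.mean()                                          # 1 parameter
      late = years > 1976
      r_step = swe - np.where(late, swe[late].mean(), swe[~late].mean()) # 2 parameters
      slope, intercept = np.polyfit(years, swe, 1)
      r_ols = swe - (slope * years + intercept)                          # 2 parameters

      for name, r, p in [("stationary", r_stat, 1),
                         ("step-stationary", r_step, 2),
                         ("OLS trend", r_ols, 2)]:
          print(f"{name:16s} BIC = {bic(r, p):.1f}")   # lowest BIC wins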

  8. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. The goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
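
    A minimal sketch of the idea on a scalar backward-Euler problem, with quadratic extrapolation of past solutions standing in for the Gappy POD forecast; the ODE and tolerances are illustrative:

      import numpy as np

      def newton(res, jac, x0, tol=1e-10, max_iter=50):
          x, iters = x0.astype(float).copy(), 0
          while np.linalg.norm(res(x)) > tol and iters < max_iter:
              x = x - np.linalg.solve(jac(x), res(x))
              iters += 1
          return x, iters

      # Backward Euler for y' = -y^3: one nonlinear solve per time step.
      dt = 0.1
      g = lambda ynew, yprev: ynew - yprev + dt * ynew**3
      jac = lambda x: np.array([[1.0 + 3.0 * dt * x[0] ** 2]])

      y, history = np.array([1.0]), []
      total = {"prev": 0, "forecast": 0}
      for step in range(20):
          residual = lambda x, yp=y: np.array([g(x[0], yp[0])])
          # Baseline: seed Newton with the previous solution.
          _, it_prev = newton(residual, jac, y)
          # Forecast: extrapolate from the last three accepted solutions.
          if len(history) >= 3:
              guess = np.array([np.polyval(np.polyfit([0, 1, 2], history[-3:], 2), 3)])
          else:
              guess = y
          y, it_fc = newton(residual, jac, guess)
          total["prev"] += it_prev
          total["forecast"] += it_fc
          history.append(y[0])

      print("Newton iterations, previous-step guess:", total["prev"])
      print("Newton iterations, forecast guess:     ", total["forecast"])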

  9. Event time analysis of longitudinal neuroimage data.

    PubMed

    Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce

    2014-08-15

    This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline.

  10. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of the species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock's and Gear's methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of the concentrations of species in the aqueous phase in addition to gas-phase chemistry.
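
    A minimal sketch of a one-step linearized implicit update on a stiff toy mechanism, in the spirit of the scheme described above; the rate constants and species are illustrative, and the real solver adds mass conservation checks and adaptive time-step control:

      import numpy as np

      # Instead of iterating Newton to convergence, solve once per step:
      #   (I - dt*J) * dc = dt * f(c_n),   c_{n+1} = c_n + dc,
      # with J the Jacobian of the chemical fluxes. Toy mechanism: fast
      # interconversion A <-> B, slow loss of B.
      k1, k2, k3 = 1e3, 5e2, 1.0

      def f(c):
          a, b = c
          return np.array([-k1 * a + k2 * b,
                            k1 * a - k2 * b - k3 * b])

      def jacobian(c):
          return np.array([[-k1, k2],
                           [k1, -k2 - k3]])

      c, dt = np.array([1.0, 0.0]), 0.05   # dt far above the explicit stability limit
      for step in range(200):
          dc = np.linalg.solve(np.eye(2) - dt * jacobian(c), dt * f(c))
          c = c + dc

      print("A, B after 10 s:", c)   # quasi-steady partitioning, slowly decaying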

  11. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to Courant number, large phase delay, and possibly excess numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.

  12. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) indicate smaller variations in flow properties compared with the steady-state solutions. This failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  13. Study on the adsorption of nitrogen and phosphorus from biogas slurry by NaCl-modified zeolite

    PubMed Central

    Cheng, Qunpeng; Li, Hongxia; Xu, Yilu; Chen, Song; Liao, Yuhua; Deng, Fang; Li, Jianfen

    2017-01-01

    A NaCl-modified zeolite was used to simultaneously remove nitrogen and phosphate from biogas slurry. The effects of pH, contact time and adsorbent dosage on the removal efficiency of nitrogen and phosphate were studied. The results showed that the highest removal efficiencies of NH4+-N (92.13%) and PO43−-P (90.3%) were achieved at pH 8. As the zeolite dose ranged from 0.5 to 5 g/100 ml, the NH4+-N and PO43−-P removal efficiencies ranged from 5.19% to 94.94% and from 72.16% to 91.63%, respectively. The adsorption isotherms of N and P removal with the NaCl-modified zeolite were well described by Langmuir models, suggesting a homogeneous sorption mechanism. Analysis of the influence of contact time with the intra-particle diffusion model showed that the adsorption of NH4+-N and PO43−-P followed the second step of the intra-particle diffusion model; the surface-diffusion adsorption step was very fast and finished within a short time. PMID:28542420
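
    For reference, a Langmuir isotherm of the kind fitted here can be calibrated in a few lines; the equilibrium data below are invented placeholders for illustration, not values from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qmax, KL):
        """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
        return qmax * KL * Ce / (1.0 + KL * Ce)

    # Hypothetical equilibrium data: Ce in mg/L, qe in mg/g
    Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    qe = np.array([1.8, 3.6, 5.5, 7.4, 8.8, 9.6])

    (qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(10.0, 0.1))
    print(f"qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg")
    ```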

  14. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
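
    For a single gating variable with frozen membrane potential, first-order ETD reduces to the familiar exponential Euler update, which is exact for the linear gating ODE over the step. A minimal sketch for the Hodgkin-Huxley sodium activation gate m, using the standard HH rate functions, follows.

    ```python
    import numpy as np

    def alpha_m(V):  # standard Hodgkin-Huxley rate functions (V in mV)
        return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))

    def beta_m(V):
        return 4.0 * np.exp(-(V + 65.0) / 18.0)

    def etd1_gate_step(m, V, dt):
        """First-order ETD (exponential Euler) for dm/dt = (m_inf - m)/tau_m.

        Treating V as frozen over the step makes the update exact for
        the linear gating ODE, so it remains stable even when dt far
        exceeds tau_m: the key advantage over explicit schemes during
        the rapid rising phase of the action potential.
        """
        a, b = alpha_m(V), beta_m(V)
        tau = 1.0 / (a + b)
        m_inf = a * tau
        return m_inf + (m - m_inf) * np.exp(-dt / tau)

    m = 0.05
    for _ in range(5):                  # dt = 1 ms, well above tau_m at -20 mV
        m = etd1_gate_step(m, V=-20.0, dt=1.0)
    ```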

  15. Development of a transient, lumped hydrologic model for geomorphologic units in a geomorphology based rainfall-runoff modelling framework

    NASA Astrophysics Data System (ADS)

    Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.

    2010-05-01

    We propose a modelling framework for distributed hydrological modelling of 10³-10⁵ km² catchments by discretizing the catchment into geomorphologic units. Each of these units is modelled using a lumped model representative of the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically based rainfall-runoff model, we create artificial rainfall events and the resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate for a real-world dataset. The model is run for approximately 10⁵ scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and the state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. The model is capable of giving an appropriate representation of the transient rainfall-runoff relations present in the artificial data set generated with the high-resolution model. The third step is to find the values of the empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated from the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of the morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated by comparing its outputs with the results of the high-resolution model. Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced in most cases, except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
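
    To illustrate the kind of three-parameter lumped model described (ponding factor, linear reservoir parameter, lag time), a minimal sketch follows. The parameter values and the exact role given to the ponding factor are assumptions for illustration, not the calibrated formulation of the study.

    ```python
    import numpy as np

    def lumped_runoff(rain, ponding, k, lag, dt=1.0):
        """Three-parameter lumped Hortonian runoff sketch.

        ponding : fraction of rainfall that becomes effective runoff input
        k       : linear-reservoir residence time (same unit as dt)
        lag     : integer lag, in time steps, applied to the outflow
        """
        S, q = 0.0, []
        decay = np.exp(-dt / k)                # linear-reservoir decay factor
        for p in rain:
            S = S * decay + ponding * p * dt   # update storage
            q.append(S / k)                    # outflow proportional to storage
        return np.concatenate([np.zeros(lag), np.array(q)[:len(rain) - lag]])

    rain = np.array([0, 5, 12, 3, 0, 0, 0, 0], dtype=float)  # mm per step
    print(lumped_runoff(rain, ponding=0.4, k=2.0, lag=1))
    ```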

  16. The STEP model: Characterizing simultaneous time effects on practice for flight simulator performance among middle-aged and older pilots.

    PubMed

    Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C

    2015-09-01

    Understanding the possible effects of the number of practice sessions (practice) and the time between practice sessions (interval) among middle-aged and older adults in real-world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real-world tasks. In this study, we took advantage of existing practice data from 5 simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by U.S. Federal Aviation Administration proficiency ratings). We developed a new Simultaneous Time Effects on Practice (STEP) model: (a) to model the simultaneous effects of practice and interval on performance of the 5 flights, and (b) to examine the effects of selected covariates (i.e., age, flight expertise, and 3 composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and the predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intraindividual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with practice or interval. Results indicated that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high-functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real-world tasks. (c) 2015 APA, all rights reserved.

  17. SWIFT MODELLER: a Java based GUI for molecular modeling.

    PubMed

    Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S

    2011-10-01

    MODELLER is command-line-driven software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with. The visualization of output also becomes cumbersome due to verbose files. This makes the whole software protocol complex and requires extensive study of the MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. It eliminates the formatting of inputs, the scripting processes and the analysis of verbose output files through automation, making the pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI, which opens and displays the protein data bank files created by the MODELLER software. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required for the software through automation of many of the steps in the original software protocol, thus saving an enormous amount of time per instance and making MODELLER very easy to work with.

  18. Earthquake models using rate and state friction and fast multipoles

    NASA Astrophysics Data System (ADS)

    Tullis, T.

    2003-04-01

    The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws, having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on the time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of the stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s, as observed during dynamic earthquake slip. Additional difficulties in modeling the dynamic slip phase of earthquakes arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and of the influence that dynamic stresses from elastic waves may have on fault slip requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. The challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. Modeling significant-sized earthquakes therefore requires millions of elements. Methods like the boundary element method that involve Green's functions normally require computation times that increase with the square of the number N of elements, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem: the influences of sufficiently remote elements are grouped together, and the elements are indexed so that the computations run more efficiently on parallel computers. Compute time then varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
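
    For concreteness, the laboratory-derived laws referred to here are commonly written in the Dieterich-Ruina form, with the direct effect proportional to a and the evolution effect proportional to b (the aging form of the state evolution is shown):

    $$ \mu(V,\theta) = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c} $$

    where V is the slip velocity, θ the state variable (with units of time), D_c the characteristic slip distance, and V_0 a reference velocity.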

  19. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.

  20. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable-order method for integrating the structural dynamics equations, based on the state transition matrix, has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. The accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
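
    For a linear time-invariant system dx/dt = Ax, the state-transition-matrix update is exact for any step size, which is the property such methods build on; for time-varying or nonlinear systems, accuracy then depends on how well A is approximated over the step. A minimal sketch of the homogeneous constant-A case, using SciPy's matrix exponential:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Undamped oscillator x'' + w0^2 x = 0 written as dx/dt = A x
    w0 = 2.0
    A = np.array([[0.0, 1.0],
                  [-w0 ** 2, 0.0]])

    def stm_step(x, A, dt):
        """Advance dx/dt = A x by dt using the state transition matrix.

        Phi = expm(A*dt) is exact for constant A, so the step size is
        limited only by how well A (or a polynomial model of A(t))
        holds over the step, not by a stability bound.
        """
        return expm(A * dt) @ x

    x = np.array([1.0, 0.0])
    for _ in range(4):
        x = stm_step(x, A, dt=0.5)
    ```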

  1. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may react chemically with neighboring species and with the molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry use the Independent Reaction Times (IRT) method, a very fast technique for calculating radiochemical yields, but one that does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparingly because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
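
    To illustrate the IRT idea in its simplest (textbook two-particle, fully diffusion-controlled) case: for a pair with initial separation r0, reaction radius R and mutual diffusion coefficient D, the reaction probability by time t is W(t) = (R/r0) erfc((r0 - R)/sqrt(4Dt)), which can be inverted directly to sample a reaction time. This sketch is the generic two-particle case, not the full RITRACKS algorithm.

    ```python
    import numpy as np
    from scipy.special import erfcinv

    def sample_irt_reaction_time(r0, R, D, rng):
        """Sample a reaction time for a diffusion-controlled pair (IRT method).

        W(t) = (R/r0) * erfc((r0 - R) / sqrt(4 D t)) is the probability
        that the pair has reacted by time t; with probability 1 - R/r0
        the pair never reacts (np.inf is returned).
        """
        u = rng.uniform()
        if u >= R / r0:                  # pair escapes without reacting
            return np.inf
        x = erfcinv(u * r0 / R)          # invert W(t) = u
        return ((r0 - R) / x) ** 2 / (4.0 * D)

    rng = np.random.default_rng(0)
    times = [sample_irt_reaction_time(r0=1.0, R=0.5, D=4.0e-3, rng=rng)
             for _ in range(5)]
    ```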

  2. Temporal rainfall disaggregation using a multiplicative cascade model for spatial application in urban hydrology

    NASA Astrophysics Data System (ADS)

    Müller, H.; Haberlandt, U.

    2018-01-01

    Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for the temporal disaggregation of daily rainfall data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and at Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, as obtained when a branching number of 2 is applied throughout, can be replaced by Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding the reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, the flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volumes in the hydrological simulations and outperforms the 1280 min approach.
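
    A minimal sketch of the best-performing disaggregation chain (branching number 3 in the first step, then branching number 2 down to 7.5 min, then linear interpolation to 5 min) is given below. The splitting weights are drawn uniformly for illustration only; the actual cascade model uses calibrated splitting probabilities.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def split(total, b):
        """Split a rainfall amount into b sub-intervals with random weights
        summing to 1 (illustrative; the real cascade uses calibrated
        volume- and position-dependent splitting probabilities)."""
        w = rng.dirichlet(np.ones(b))
        return total * w

    def disaggregate_day(daily_mm):
        """1440 min -> 3 x 480 min -> ... -> 7.5 min, then 5 min by interpolation."""
        boxes = split(daily_mm, 3)                     # first step: b = 3
        for _ in range(6):                             # 480 -> 7.5 min: b = 2
            boxes = np.concatenate([split(v, 2) for v in boxes])
        # 192 boxes of 7.5 min -> 288 values of 5 min by linear interpolation
        t75 = np.arange(boxes.size) * 7.5 + 3.75       # interval midpoints
        t5 = np.arange(288) * 5.0 + 2.5
        rates = np.interp(t5, t75, boxes / 7.5)        # intensity, mm/min
        return rates * 5.0                             # back to mm per 5 min

    series_5min = disaggregate_day(daily_mm=24.0)
    ```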

  3. Reverse-Time Imaging Based on Full-Waveform Inverted Velocity Model for Nondestructive Testing of Heterogeneous Engineered Structures

    NASA Astrophysics Data System (ADS)

    Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.

    2017-12-01

    Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has been degraded non-uniformly by environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. But building the wave velocity model of small, high-contrast defects is difficult because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of the large-scale background variations of the engineered structure. To reduce computational cost and improve the detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering the background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, computational power, and the imaging workflow itself, so that the procedure can be applied efficiently to geometrically complex 3D solids and waveguides. Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can also be transferred to applications at regional scales.
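
    The RTM step referred to here rests on the zero-lag cross-correlation imaging condition, which for a forward-propagated source wavefield S and a back-propagated receiver wavefield R reads

    $$ I(\mathbf{x}) = \int_0^T S(\mathbf{x},t)\, R(\mathbf{x},t)\, dt $$

    so that the image I is large where the two wavefields coincide in time, i.e. at reflectors and scatterers.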

  4. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present ATLAS, a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing. Lagrangian models have some crucial advantages over Eulerian grid-box-based models: no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions, with a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparison of model results with tracer data from flights of the ER-2 aircraft in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, shows that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.

  5. 75 FR 75961 - Notice of Implementation of the Wind Erosion Prediction System for Soil Erodibility System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...

  6. WebStart WEPS: Remote data access and model execution functionality added to WEPS

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a daily time step, process based wind erosion model developed by the United States Department of Agriculture - Agricultural Research Service (USDA-ARS). WEPS simulates climate and management driven changes to the surface/vegetation/soil state on a daily b...

  7. Strengthening the working alliance through a clinician's familiarity with the 12-step approach.

    PubMed

    Dennis, Cory B; Roland, Brian D; Loneck, Barry M

    2018-01-01

    The working alliance plays an important role in the substance use disorder treatment process. Many substance use disorder treatment providers incorporate the 12-Step approach to recovery into treatment. With the 12-Step approach known among many clients and clinicians, it may well factor into the therapeutic relationship. We investigated how, from the perspective of clients, a clinician's level of familiarity with and in-session time spent on the 12-Step approach might affect the working alliance between clients and clinicians, including possible differences based on a clinician's recovery status. We conducted a secondary study using data from 180 clients and 31 clinicians. Approximately 81% of client participants were male, and approximately 65% of clinician participants were female. We analyzed data with Stata using a population-averaged model. From the perspective of clients with a substance use disorder, clinicians' familiarity with the 12-Step approach has a positive relationship with the working alliance. The client-estimated amount of in-session time spent on the 12-Step approach did not have a statistically significant effect on ratings of the working alliance. A clinician's recovery status did not moderate the relationship between 12-Step familiarity and the working alliance. These results suggest that clinicians can influence, in part, how their clients perceive the working alliance by being familiar with the 12-Step approach. This might be particularly salient for clinicians who provide substance use disorder treatment at agencies that incorporate, on some level, the 12-Step approach to recovery.

  8. Real-time control of walking using recordings from dorsal root ganglia.

    PubMed

    Holinski, B J; Everaert, D G; Mushahwar, V K; Stein, R B

    2013-10-01

    The goal of this study was to decode sensory information from the dorsal root ganglia (DRG) in real time, and to use this information to adapt the control of unilateral stepping with a state-based control algorithm consisting of both feed-forward and feedback components. In five anesthetized cats, hind limb stepping on a walkway or treadmill was produced by patterned electrical stimulation of the spinal cord through implanted microwire arrays, while neuronal activity was recorded from the DRG. Different parameters, including distance and tilt of the vector between hip and limb endpoint, integrated gyroscope and ground reaction force were modelled from recorded neural firing rates. These models were then used for closed-loop feedback. Overall, firing-rate-based predictions of kinematic sensors (limb endpoint, integrated gyroscope) were the most accurate with variance accounted for >60% on average. Force prediction had the lowest prediction accuracy (48 ± 13%) but produced the greatest percentage of successful rule activations (96.3%) for stepping under closed-loop feedback control. The prediction of all sensor modalities degraded over time, with the exception of tilt. Sensory feedback from moving limbs would be a desirable component of any neuroprosthetic device designed to restore walking in people after a spinal cord injury. This study provides a proof-of-principle that real-time feedback from the DRG is possible and could form part of a fully implantable neuroprosthetic device with further development.

  9. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    PubMed

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.

  10. Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states

    NASA Astrophysics Data System (ADS)

    Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.

    1999-02-01

    The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
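
    For reference, the Green-Kubo relations used here express transport coefficients as time integrals of equilibrium autocorrelation functions, e.g. for self-diffusion and shear viscosity:

    $$ D = \frac{1}{3}\int_0^\infty \langle \mathbf{v}_i(t)\cdot\mathbf{v}_i(0)\rangle\, dt, \qquad \eta = \frac{V}{k_B T}\int_0^\infty \langle P_{xy}(t)\, P_{xy}(0)\rangle\, dt $$

    where v_i is the velocity of particle i and P_xy an off-diagonal element of the pressure tensor; the long-time tails of these correlation functions are what ties the statistical precision to the slowest relaxation time of the system, as the abstract notes.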

  11. Modeling and optimization of Look-Locker spin labeling for measuring perfusion and transit time changes in activation studies taking into account arterial blood volume.

    PubMed

    Francis, S T; Bowtell, R; Gowland, P A

    2008-02-01

    This work describes a new compartmental model with step-wise temporal analysis for a Look-Locker (LL)-flow-sensitive alternating inversion-recovery (FAIR) sequence, which combines the FAIR arterial spin labeling (ASL) scheme with a LL echo planar imaging (EPI) measurement, using a multireadout EPI sequence for simultaneous perfusion and T2* measurements. The new model highlights the importance of accounting for the transit time of blood through the arteriolar compartment, delta, in the quantification of perfusion. The signal expected is calculated in a step-wise manner to avoid discontinuities between different compartments. The optimal LL-FAIR pulse sequence timings for the measurement of perfusion with high signal-to-noise ratio (SNR) and high temporal resolution at 1.5, 3, and 7T are presented. LL-FAIR is shown to provide better SNR per unit time compared to standard FAIR. The sequence has been used experimentally for simultaneous monitoring of perfusion, transit time, and T2* changes in response to a visual stimulus in four subjects. It was found that perfusion increased by 83 +/- 4% on brain activation from a resting-state value of 94 +/- 13 ml/100 g/min, while T2* increased by 3.5 +/- 0.5%. (c) 2008 Wiley-Liss, Inc.

  12. N-body simulations of star clusters

    NASA Astrophysics Data System (ADS)

    Engle, Kimberly Anne

    1999-10-01

    We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package "Starlab." The GRAPE-4 system is a massively parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator "kira" employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.

  13. Void Growth and Coalescence Simulations

    DTIC Science & Technology

    2013-08-01

    distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; this study thus helps to provide guidelines for the modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...

  14. Investigation of Macroscopic Brittle Creep Failure Caused by Microcrack Growth Under Step Loading and Unloading in Rocks

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhao; Shao, Zhushan

    2016-07-01

    The growth of subcritical cracks plays an important role in the creep of brittle rock, and the stress path has a great influence on creep properties. A micromechanics-based model is presented to study the effect of the stress path on creep properties. The microcrack model of Ashby and Sammis, Charles' law, and a new micro-macro relation are employed in our model. This new micro-macro relation is proposed by using the correlation between the micromechanical and macroscopic definitions of damage. A stress path function is also introduced through the relationship between stress and time. Theoretical expressions for the stress-strain relationship and creep behavior are derived. The effects of confining pressure on the stress-strain relationship are studied. Crack initiation stress and peak stress are obtained under different confining pressures. The applied constant stress that can cause creep behavior is predicted. Creep properties are studied under the step loading of axial stress or the unloading of confining pressure. The rationality of the micromechanics-based model is verified against the experimental results for Jinping marble. Furthermore, the effects of the model parameters and of the unloading rate of confining pressure on creep behavior are analyzed. The coupling effect of step axial stress and confining pressure on creep failure is also discussed. The results have implications for the deformation behavior and the time-delayed rockburst mechanism caused by microcrack growth in surrounding rocks during deep underground excavations.
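
    Charles' law, as employed in such models, gives the subcritical crack growth velocity as a power law in the stress intensity factor; written here in a commonly used normalized form:

    $$ v = \frac{da}{dt} = v_0 \left( \frac{K_I}{K_{IC}} \right)^{n} $$

    where K_I is the mode-I stress intensity factor, K_IC the fracture toughness, and n the subcritical crack growth index; the strong nonlinearity in n is what makes creep life so sensitive to the applied stress path.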

  15. Evaluation of TOPLATS on three Mediterranean catchments

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2016-08-01

    Physically based hydrological models are complex tools that provide a complete description of the different processes occurring in a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability under different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research using this model; (2) a sensitivity analysis (SA) of the model with two contrasting methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of the simulation time step (hourly vs. daily). The model was applied to three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain) and characterized by different levels of Mediterranean climate influence. Both the Morris and Sobol methods gave very similar results, identifying the Brooks-Corey pore size distribution index (B), bubbling pressure (ψc) and hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation due to the large climatic variability between the calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of calibration/validation period selection was implemented, improving model results.

  16. Application of a nonrandomized stepped wedge design to evaluate an evidence-based quality improvement intervention: a proof of concept using simulated data on patient-centered medical homes.

    PubMed

    Huynh, Alexis K; Lee, Martin L; Farmer, Melissa M; Rubenstein, Lisa V

    2016-10-21

    Stepped wedge designs have gained recognition as a method for rigorously assessing implementation of evidence-based quality improvement interventions (QIIs) across multiple healthcare sites. In theory, this design uses random assignment of sites to successive QII implementation start dates based on a timeline determined by evaluators. However, in practice, QII timing is often controlled more by site readiness. We propose an alternate version of the stepped wedge design that does not assume the randomized timing of implementation while retaining the method's analytic advantages and applying to a broader set of evaluations. To test the feasibility of a nonrandomized stepped wedge design, we developed simulated data on patient care experiences and on QII implementation that had the structures and features of the expected data from a planned QII. We then applied the design in anticipation of performing an actual QII evaluation. We used simulated data on 108,000 patients to model nonrandomized stepped wedge results from QII implementation across nine primary care sites over 12 quarters. The outcome we simulated was change in a single self-administered question on access to care used by Veterans Health Administration (VA), based in the United States, as part of its quarterly patient ratings of quality of care. Our main predictors were QII exposure and time. Based on study hypotheses, we assigned values of 4 to 11 % for improvement in access when sites were first exposed to implementation and 1 to 3 % improvement in each ensuing time period thereafter when sites continued with implementation. We included site-level (practice size) and respondent-level (gender, race/ethnicity) characteristics that might account for nonrandomized timing in site implementation of the QII. We analyzed the resulting data as a repeated cross-sectional model using HLM 7 with a three-level hierarchical data structure and an ordinal outcome. Levels in the data structure included patient ratings, timing of adoption of the QII, and primary care site. We were able to demonstrate a statistically significant improvement in adoption of the QII, as postulated in our simulation. The linear time trend while sites were in the control state was not significant, also as expected in the real life scenario of the example QII. We concluded that the nonrandomized stepped wedge design was feasible within the parameters of our planned QII with its data structure and content. Our statistical approach may be applicable to similar evaluations.

  17. Modeling the Cienega de Santa Clara, Sonora, Mexico

    NASA Astrophysics Data System (ADS)

    Huckelbridge, K. H.; Hidalgo, H.; Dracup, J.; Ibarra Obando, S. E.

    2002-12-01

    The Cienega de Santa Clara is a created wetland located in the Colorado River Delta (CRD), in Sonora, Mexico. It is sustained by agricultural return flows from the Wellton-Mohawk Irrigation District in Arizona and the Mexicali Valley in Mexico. As one of the few wetlands remaining in the CRD, it provides critical habitat for several species of fish and birds, including several endangered species such as the desert pupfish (Cyprinodon macularius) and the Yuma clapper rail (Rallus longirostris yumanensis). However, this habitat may be in jeopardy if the quantity and quality of the agricultural inflows are significantly altered. This study seeks to develop a model that describes the dynamics of wetland hydrology, vegetation, and water quality as a function of inflow variability and salinity loading. The model is divided into four modules set up in sequence. For a given time step, the sequence begins with the first module, which utilizes basic diffusion equations to simulate mixing processes in the shallow wetland when the flow and concentration of the inflow deviate from the baseline. The second module develops a vegetated-area response to the resulting distribution of salinity in the wetland. Using the new area of vegetation cover determined by the second module and various meteorological variables, the third module calculates the evapotranspiration rate for the wetland, using the Penman-Monteith equation. Finally, the fourth module takes the overall evapotranspiration rate, along with precipitation, inflow and outflow, and calculates the new volume of the wetland using a water balance. This volume then establishes the initial variables for the next time step. The key outputs from the model are salinity concentration, area of vegetation cover, and wetland volume for each time step. Results from this model will illustrate how the wetland's hydrology, vegetation, and water quality are altered over time under various inflow scenarios. These outputs can ultimately be used to assess the impacts to wetland wildlife and overall ecosystem health, and to determine the best management strategy for the Cienega de Santa Clara.

  18. A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags

    NASA Astrophysics Data System (ADS)

    Meng, S.; Xie, X.

    2015-12-01

    In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties in the input data, model parameters, model structure and output observations. Data assimilation is a useful methodology for reducing uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimation of the initial soil moisture condition will improve forecasting performance, and accounting for the time delay of runoff routing is another important factor. Moreover, observations of hydrological variables (from both ground measurements and satellites) are becoming readily available, so the reliability of short-term flood forecasting can be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates up-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF) method, and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS) method. This smoothing technique is adopted to account for the runoff routing lag. Using such an assimilation framework for soil moisture and discharge observations is expected to improve flood forecasting. To assess the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately, without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
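
    The first step of such a framework is a standard stochastic EnKF analysis. A minimal sketch for a generic state ensemble and a linear observation operator follows; all variable names and dimensions are illustrative, not those of the study.

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """Stochastic EnKF analysis step.

        X : (n, N) state ensemble (n state variables, N members)
        y : (m,)   observation vector (e.g. up-layer soil moisture)
        H : (m, n) linear observation operator
        R : (m, m) observation-error covariance
        """
        n, N = X.shape
        Xm = X.mean(axis=1, keepdims=True)
        A = X - Xm                                     # ensemble anomalies
        P = A @ A.T / (N - 1)                          # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(
            np.zeros(len(y)), R, size=N).T             # perturbed observations
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(1)
    X = rng.normal(0.3, 0.05, size=(4, 50))       # e.g. 4 soil layers, 50 members
    H = np.array([[1.0, 0.0, 0.0, 0.0]])          # observe the top layer only
    X = enkf_update(X, y=np.array([0.35]), H=H, R=np.eye(1) * 1e-4, rng=rng)
    ```

    The EnKS step of the framework extends the same update backwards over a fixed time window, which is how the runoff routing lag is accounted for.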

  19. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters, in particular the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.

  20. Development of a semi-automated model identification and calibration tool for conceptual modelling of sewer systems.

    PubMed

    Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick

    2013-01-01

    Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not suitable for these applications due to their excessive computation time; simplifications are consequently required. A lumped conceptual modelling approach results in much faster calculation. The process of identifying and calibrating the conceptual model structure can, however, be time-consuming. Moreover, many conceptual models lack accuracy or do not account for backwater effects. To overcome these problems, a modelling methodology was developed that is suited for semi-automatic calibration. The methodology is tested on the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink® tool was developed to guide the modeller through the step-wise model construction, significantly reducing the time required for the conceptual modelling process.

  1. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    PubMed

    Araújo, Ricardo de A

    2012-04-01

    Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation of all these techniques is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT), generically called the increasing morphological perceptron (IMP). We also present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of the non-differentiability of morphological operations. We have included in the learning process a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found with models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
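
    The building block of such models is the morphological (max-plus) neuron, which replaces the weighted sum with a dilation. The sketch below trains a single dilation neuron by subgradient descent; it illustrates the operation class and how gradient learning copes with the non-differentiable max, not the specific IMP architecture of the paper.

    ```python
    import numpy as np

    def dilation(x, w):
        """Max-plus (dilation) neuron: y = max_j (x_j + w_j)."""
        return np.max(x + w)

    def train_dilation_neuron(X, t, lr=0.01, epochs=200):
        """Subgradient descent on squared error for one dilation neuron.

        Only the weight attaining the max receives a gradient, which is
        the standard way around the non-differentiability of the max.
        """
        rng = np.random.default_rng(0)
        w = rng.normal(0.0, 0.1, size=X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, t):
                j = np.argmax(x + w)        # active (max-attaining) index
                err = dilation(x, w) - target
                w[j] -= lr * err            # subgradient step on w_j only
        return w

    X = np.array([[0.1, 0.5], [0.4, 0.2], [0.8, 0.3]])
    t = np.array([0.7, 0.6, 1.0])
    w = train_dilation_neuron(X, t)
    ```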

  2. Precision Timed Infrastructure: Design Challenges

    DTIC Science & Technology

    2013-09-19

    Fig. 1. Conceptual overview of translation steps between timing constructs, clock synchronization and communication, PRET machines, and other platforms.

  3. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned (operator-splitting) solution may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired that can guarantee a robust solution of the coupled problem with minor constraints on the time step size. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both the linear and nonlinear solvers as well as for MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and a modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
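
    The robustness fix described here, Newton-Raphson with a backtracking line search, can be sketched generically for a residual F(u). The damping strategy below (halving the step until the residual norm decreases) is one simple variant under assumed conventions, not necessarily the exact strategy used in OGS; the toy residual is purely illustrative.

    ```python
    import numpy as np

    def newton_line_search(F, J, u, tol=1e-10, max_iter=25):
        """Newton-Raphson with backtracking line search for F(u) = 0.

        The full Newton step du is damped by alpha (halved repeatedly)
        until ||F(u + alpha*du)|| < ||F(u)||, which restores convergence
        when the full step overshoots, as it can for strongly coupled
        hydrothermal residuals.
        """
        for _ in range(max_iter):
            r = F(u)
            if np.linalg.norm(r) < tol:
                return u
            du = np.linalg.solve(J(u), -r)
            alpha = 1.0
            while (np.linalg.norm(F(u + alpha * du)) >= np.linalg.norm(r)
                   and alpha > 1e-3):
                alpha *= 0.5                    # backtrack
            u = u + alpha * du
        return u

    # Toy coupled residual with a temperature-dependent exponential term
    F = lambda u: np.array([u[0] ** 3 - u[1], np.exp(u[1]) - 2.0 + 0.1 * u[0]])
    J = lambda u: np.array([[3 * u[0] ** 2, -1.0], [0.1, np.exp(u[1])]])
    u = newton_line_search(F, J, np.array([1.0, 1.0]))
    ```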

  4. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute a step only after the previous step has been executed at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can be executing different time steps at the same moment. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications and the simulation of natural phenomena, in which execution times can be different and variable due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node slows down the execution only at the neighboring nodes, not at all the other nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful for representing the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, with the computation associated with a region assigned to a computing node. We considered both mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through asynchronous execution, compared to the all-to-all synchronous approach, is notable and can be as large as 90% in terms of speedup.
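
    The relaxed synchronization rule (a node may advance to step t once its neighbors have completed step t-1) can be captured by a simple recurrence on per-step completion times. The sketch below computes these times for a 1-D ring of partitions; the node layout and work-time distribution are illustrative, and the Petri net analysis of the paper is replaced here by direct simulation.

    ```python
    import numpy as np

    def async_ca_finish_times(work, n_steps):
        """Earliest completion time of each step at each node in a 1-D ring
        of cellular-automaton partitions under neighbor-only synchronization.

        finish[t, i] = work[t, i] + max of finish[t-1, .] over {i-1, i, i+1}:
        a slow node delays only its neighbors at the next step, not the
        whole machine as an all-to-all barrier would.
        """
        n = work.shape[1]
        finish = np.zeros((n_steps + 1, n))
        for t in range(1, n_steps + 1):
            for i in range(n):
                ready = max(finish[t - 1, (i - 1) % n],
                            finish[t - 1, i],
                            finish[t - 1, (i + 1) % n])
                finish[t, i] = ready + work[t - 1, i]
        return finish[1:]

    rng = np.random.default_rng(7)
    work = rng.exponential(1.0, size=(100, 16))   # variable per-step run times
    print(async_ca_finish_times(work, 100)[-1].max())  # asynchronous makespan
    ```

    Replacing the three-term max with the maximum over all nodes reproduces the all-to-all barrier, which makes the comparison between the two synchronization patterns direct.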

  5. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.

  6. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    NASA Astrophysics Data System (ADS)

    Bruns, Tim M.; Wagenaar, Joost B.; Bauman, Matthew J.; Gaunt, Robert A.; Weber, Douglas J.

    2013-04-01

    Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability.
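
    The linear decoding at the heart of this record can be illustrated with a toy least-squares decoder (the data and dimensions below are synthetic assumptions, not the authors' model):

        import numpy as np

        # Map binned firing rates to leg position with least-squares weights,
        # then decode one 50 ms bin at a time.
        rng = np.random.default_rng(0)
        n_bins, n_units = 2000, 120
        W_true = rng.normal(size=(n_units, 2))                       # "true" mapping
        rates = rng.poisson(5.0, size=(n_bins, n_units)).astype(float)
        position = rates @ W_true + rng.normal(scale=0.5, size=(n_bins, 2))

        train = slice(0, 1500)                                       # training segment
        W, *_ = np.linalg.lstsq(rates[train], position[train], rcond=None)

        # Decode the held-out bins as they arrive, one bin at a time.
        decoded = np.array([rates[t] @ W for t in range(1500, n_bins)])
        print(np.corrcoef(decoded[:, 0], position[1500:, 0])[0, 1])  # decoding accuracy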

  7. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    PubMed Central

    Bruns, Tim M; Wagenaar, Joost B; Bauman, Matthew J; Gaunt, Robert A; Weber, Douglas J

    2013-01-01

Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability. PMID:23503062

  8. On the performance of updating Stochastic Dynamic Programming policy using Ensemble Streamflow Prediction in a snow-covered region

    NASA Astrophysics Data System (ADS)

    Martin, A.; Pascal, C.; Leconte, R.

    2014-12-01

Stochastic Dynamic Programming (SDP) is known to be an effective technique to find the optimal operating policy of hydropower systems. In order to improve the performance of SDP, this project evaluates the impact of re-updating the policy at every time step by using Ensemble Streamflow Prediction (ESP). We present a case study of the Kemano hydropower system on the Nechako River in British Columbia, Canada. Managed by Rio Tinto Alcan (RTA), this system is subject to large streamflow volumes in spring due to the large amount of snow accumulated during the winter season. Therefore, the operating policy should not only maximize production but also minimize the risk of flooding. The hydrological behavior of the system is simulated with CEQUEAU, a distributed and deterministic hydrological model developed by the Institut national de la recherche scientifique - Eau, Terre et Environnement (INRS-ETE) in Quebec, Canada. At each decision time step, CEQUEAU is used to generate ESP scenarios based on historical meteorological sequences and the current state of the hydrological model. These scenarios are fed into the SDP to optimize the new release policy for the next time steps. This routine is then repeated over the entire simulation period. Results are compared with those obtained by using SDP on historical inflow scenarios.

  9. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid VSTWPP method based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time series of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method supports two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The method is tested against an auto-regressive moving average (ARMA) model by comparing predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), the mean absolute percent error (MAPE), and the mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach with the hybrid model is the most accurate, compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
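
    To make the single-step/multi-step distinction concrete, here is a small sketch using a plain autoregressive model as a stand-in for the hybrid MLR&LS predictor (synthetic data; the order and horizon are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        power = np.cumsum(rng.normal(size=500))     # stand-in wind power series
        p = 4                                       # assumed AR order

        # Build a lagged design matrix (oldest lag first) and fit by least squares.
        X = np.column_stack([power[i:len(power) - p + i] for i in range(p)])
        y = power[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        def ssp(history):
            # Single-step: predict one step ahead from observed history only.
            return history[-p:] @ coef

        def msp(history, horizon):
            # Multi-step: feed each prediction back as an input for the next step.
            h = list(history[-p:])
            for _ in range(horizon):
                h.append(np.array(h[-p:]) @ coef)
            return h[p:]

        print(ssp(power))                  # one step ahead
        print(msp(power, horizon=6)[-1])   # six steps ahead via feedback

    In the multi-step variant the model's own outputs are fed back as inputs, so errors can compound with the horizon, which is why the record compares correlations across prediction times and windows.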

  10. A cellular automata approach for modeling surface water runoff

    NASA Astrophysics Data System (ADS)

    Jozefik, Zoltan; Nanu Frechen, Tobias; Hinz, Christoph; Schmidt, Heiko

    2015-04-01

This abstract reports the development and application of a two-dimensional cellular-automata-based model, which couples the dynamics of overland flow, infiltration processes, and surface evolution through sediment transport. The natural hill slopes are represented by their topographic elevation and spatially varying soil properties (infiltration rates and surface roughness coefficients). The model allows simulation of Hortonian overland flow and infiltration during complex rainfall events. An advantage of the cellular automata approach over the kinematic wave equations is that wet/dry interfaces, which often appear in rainfall overland flows, can be accurately captured and are not a source of numerical instabilities. An adaptive explicit time stepping scheme allows rainfall events to be adequately resolved in time, while large time steps are taken during dry periods for run-time efficiency. The time step is constrained by the CFL condition and mass conservation considerations. The spatial discretization is shown to be first-order accurate. For validation purposes, hydrographs for non-infiltrating and infiltrating plates are compared to the kinematic wave analytic solutions and data taken from the literature [1,2]. Results show that our cellular automata model accurately reproduces hydrograph patterns. However, recent work has shown that even though the hydrograph is satisfactorily reproduced, the flow field within the plot might be inaccurate [3]. For a more stringent validation, we compare steady-state velocity, water flux, and water depth fields to rainfall simulation experiments conducted in Thies, Senegal [3]. Comparisons show that our model is able to accurately capture these flow properties. Currently, a sediment transport and deposition module is being implemented and tested. [1] M. Rousseau, O. Cerdan, O. Delestre, F. Dupros, F. James, S. Cordier. Overland flow modeling with the Shallow Water Equation using a well balanced numerical scheme: Adding efficiency or just more complexity? 2012. [2] Fritz R. Fiedler, J. A. Ramirez. A numerical method for simulating discontinuous shallow flow over an infiltrating surface. Int. J. Numer. Meth. Fluids, 2000; 32: 219-240. [3] C. Mügler, O. Planchon, J. Patin, S. Weill, N. Silvera, P. Richard, E. Mouche. Comparison of roughness models to simulate overland flow and tracer transport experiments under simulated rainfall at plot scale. Journal of Hydrology, 402 (2011) 25-40.
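
    The CFL-constrained adaptive time step described above can be sketched in a few lines (illustrative only; the dry-period cap and the numerical guards are assumptions):

        import numpy as np

        def adaptive_dt(depth, velocity, dx, cfl=0.9, dt_max=60.0, g=9.81):
            # Shallow-flow wave celerity |u| + sqrt(g h); dry cells impose no limit.
            wet = depth > 1e-6
            if not np.any(wet):
                return dt_max              # fully dry domain: take the large step
            celerity = np.abs(velocity[wet]) + np.sqrt(g * depth[wet])
            return min(dt_max, cfl * dx / celerity.max())

        depth = np.array([0.0, 0.02, 0.05])           # m, one dry and two wet cells
        velocity = np.array([0.0, 0.3, 0.5])          # m/s
        print(adaptive_dt(depth, velocity, dx=1.0))   # ~0.75 s for this example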

  11. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations of reverse time migration are the intensive computation of the forward and backward simulations, the overall run time, and the memory allocation required by the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirements of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
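
    The Krylov-subspace ingredient can be illustrated with the standard Arnoldi iteration (a textbook algorithm, not the authors' implementation), which builds the orthonormal basis onto which a large operator is projected to obtain a reduced-order model:

        import numpy as np

        def arnoldi(A, b, m):
            # Orthonormal basis Q of span{b, Ab, ..., A^(m-1) b} and the small
            # projected matrix H, so that A is approximated on the subspace.
            n = b.size
            Q = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = b / np.linalg.norm(b)
            for j in range(m):
                v = A @ Q[:, j]
                for i in range(j + 1):               # modified Gram-Schmidt
                    H[i, j] = Q[:, i] @ v
                    v -= H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(v)
                if H[j + 1, j] < 1e-12:              # invariant subspace found early
                    return Q[:, : j + 1], H[: j + 2, : j + 1]
                Q[:, j + 1] = v / H[j + 1, j]
            return Q, H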

  12. A Flexible Framework Hydrological Informatic Modeling System - HIMS

    NASA Astrophysics Data System (ADS)

Wang, L.; Wang, Z.; Changming, L.; Li, J.; Bai, P.

    2017-12-01

Simulating the water cycle in a way that fits the temporal and spatial characteristics of the study area is important for accurate flood prediction and streamflow simulation, as soil properties, landscape, climate, and land management are the critical factors influencing the non-linear rainfall-runoff relationship at watershed scales. Most existing hydrological models, having a fixed, single structure and mode, cannot simulate the water cycle at different places with customized mechanisms. To solve this problem, this study develops the Hydro-Informatic Modeling System (HIMS), in which each critical hydrological process is a module with multiple options for various scenarios. HIMS has a structure accounting for the two runoff generation mechanisms of infiltration excess and saturation excess, and it estimates runoff with different methods, including the Time Variance Gain Model (TVGM), the LCM model, which performs well in ungauged areas, and the widely used Soil Conservation Service Curve Number (SCS-CN) method. The channel routing module contains the most widely used Muskingum method and the kinematic wave equation with a new solution method. HIMS model performance with its symbolic runoff generation model LCM was evaluated through comparison with observed streamflow datasets of the Lhasa River watershed at hourly, daily, and monthly time steps. Comparisons between simulated and observed streamflows gave NSE values higher than 0.87 and WE within ±20%. A water balance analysis of precipitation, streamflow, actual evapotranspiration (ET), and soil moisture change was conducted at the annual time step, and comparison with literature results for the Lhasa River watershed showed that the HIMS model performance is reliable.

  13. Aging effect on step adjustments and stability control in visually perturbed gait initiation.

    PubMed

    Sun, Ruopeng; Cui, Chuyi; Shea, John B

    2017-10-01

Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit the original motor plan and to select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used in which the anticipatory postural adjustment (APA) during gait initiation triggered the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and the Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement for late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e., foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening deficits in gait adaptability and for fall prevention. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A cell-vertex multigrid method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Radespiel, R.

    1989-01-01

A cell-vertex scheme for the Navier-Stokes equations, which is based on central difference approximations and Runge-Kutta time stepping, is described. Using local time stepping, implicit residual smoothing, a multigrid method, and carefully controlled artificial dissipative terms, very good convergence rates are obtained for a wide range of two- and three-dimensional flows over airfoils and wings. The accuracy of the code is examined by grid refinement studies and comparison with experimental data. For accurate prediction of turbulent flows with strong separations, a modified version of the nonequilibrium turbulence model of Johnson and King is introduced, which is well suited for implementation in three-dimensional Navier-Stokes codes. It is shown that the solutions for three-dimensional flows with strong separations can be dramatically improved when a nonequilibrium model of turbulence is used.

  15. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and the power of these methods in a stepped wedge trial setting with a binary outcome, where few clusters are available and when the appropriate adjustment is made for a time trend, which by design may confound the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.

  16. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

This paper continues a comparative analysis of modern satellite models of the Earth's gravity that we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6], and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, we used a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variation equations by Everhart's implicit one-step algorithm [1]. When calculating coordinates and velocities within the integration step (at given instants of time), the approximate Everhart formula was employed.

  17. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  18. Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.

    PubMed

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia so that preventive action can be taken well in advance. © 2012 Diabetes Technology Society.

  19. Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models

    PubMed Central

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

Background. Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. Methods. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. Conclusions. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia so that preventive action can be taken well in advance. PMID:23439179
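
    The six-step-ahead alarm logic common to both versions of this record can be sketched with a plain autoregressive stand-in for RARPLS (synthetic setup; the threshold, order, and horizon below are assumptions):

        import numpy as np

        THRESHOLD = 70.0   # mg/dl, assumed hypoglycemia limit
        P, HORIZON = 6, 6  # assumed AR order; six 5-min steps = 30 min ahead

        def fit_ar(window, p=P):
            # Refit on the current moving window (the "recursive" update idea).
            X = np.column_stack([window[i:len(window) - p + i] for i in range(p)])
            coef, *_ = np.linalg.lstsq(X, window[p:], rcond=None)
            return coef

        def predict_ahead(history, coef, horizon=HORIZON):
            # Feed predictions back to reach the multi-step horizon.
            h = list(history[-P:])
            for _ in range(horizon):
                h.append(np.array(h[-P:]) @ coef)
            return h[-1]

        def alarm(cgm_window):
            w = np.asarray(cgm_window, dtype=float)
            return predict_ahead(w, fit_ar(w)) < THRESHOLD   # True -> raise alarm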

  20. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  1. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  2. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

A common objective of model predictive control (MPC) design is a large initial feasible region, a low online computational burden, and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  3. Modelling nematode movement using time-fractional dynamics.

    PubMed

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
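
    A generic correlated random walk is simple to simulate; the sketch below (illustrative distributions only; the paper's fitted model additionally couples turning angle and speed) shows where the correlation between successive steps comes from:

        import numpy as np

        rng = np.random.default_rng(2)

        def crw(n_steps, turn_sd=0.3, mean_speed=1.0):
            # Each heading is a small zero-mean perturbation of the previous one,
            # so successive displacement directions are correlated.
            heading, pos = 0.0, np.zeros(2)
            track = [pos.copy()]
            for _ in range(n_steps):
                heading += rng.normal(0.0, turn_sd)       # correlated turning angle
                speed = rng.gamma(4.0, mean_speed / 4.0)  # assumed speed distribution
                pos += speed * np.array([np.cos(heading), np.sin(heading)])
                track.append(pos.copy())
            return np.array(track)

        track = crw(500)   # a single simulated nematode trail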

  4. Unexpected Reaction Pathway for butyrylcholinesterase-catalyzed inactivation of “hunger hormone” ghrelin

    NASA Astrophysics Data System (ADS)

    Yao, Jianzhuang; Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2016-02-01

Extensive computational modeling and simulations have been carried out, in the present study, to uncover the fundamental reaction pathway for butyrylcholinesterase (BChE)-catalyzed hydrolysis of ghrelin, demonstrating that the acylation process of BChE-catalyzed hydrolysis of ghrelin follows an unprecedented single-step reaction pathway and the single-step acylation process is rate-determining. The free energy barrier (18.8 kcal/mol) calculated for the rate-determining step is reasonably close to the experimentally derived free energy barrier (~19.4 kcal/mol), suggesting that the obtained mechanistic insights are reasonable. The single-step reaction pathway for the acylation is remarkably different from the well-known two-step acylation reaction pathway for numerous ester hydrolysis reactions catalyzed by a serine esterase. This is the first demonstration that a single-step reaction pathway is possible for an ester hydrolysis reaction catalyzed by a serine esterase; therefore, one can no longer simply assume that the acylation process must follow the well-known two-step reaction pathway.
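
    As a rough consistency check on the quoted barriers, conventional transition-state theory converts a free energy barrier into a rate constant via the Eyring equation; the back-of-envelope arithmetic below (an illustration under standard TST assumptions, not taken from the paper) shows how close the two barriers are in rate terms:

        import math

        # Eyring equation: k = (kB * T / h) * exp(-dG / (R * T)) at T = 298.15 K.
        KB, H = 1.380649e-23, 6.62607015e-34     # J/K, J s
        R, T = 1.987204e-3, 298.15               # R in kcal/(mol K)

        def eyring_rate(dg_kcal_per_mol):
            return (KB * T / H) * math.exp(-dg_kcal_per_mol / (R * T))

        print(eyring_rate(18.8))   # computed barrier    -> ~0.1 s^-1
        print(eyring_rate(19.4))   # experimental barrier -> ~0.04 s^-1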

  5. Real-Time Simulation of the X-33 Aerospace Engine

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert

    1999-01-01

This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot fire test data and found to be in good agreement.

  6. Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Cheng, Jiahao

    2018-02-01

Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
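
    The subcycling idea can be expressed schematically (a conceptual sketch with user-supplied integrators and coupling routine; not the authors' CPFE implementation):

        def subcycle(state_fast, state_slow, step_fast, step_slow, couple,
                     dt_c, dt_f, n_coarse):
            # The fast (e.g., twinned) subdomain advances with a fine step dt_f
            # inside each coarse step dt_c of the slow subdomain; the two are
            # re-coupled at every coarse increment.
            n_sub = int(round(dt_c / dt_f))            # fine steps per coarse step
            for _ in range(n_coarse):
                state_slow = step_slow(state_slow, dt_c)    # slow domain: one big step
                for _ in range(n_sub):                      # fast domain: subcycle
                    state_fast = step_fast(state_fast, dt_f)
                # predictor-corrector style exchange at the subdomain interface
                state_fast, state_slow = couple(state_fast, state_slow)
            return state_fast, state_slow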

  7. Bifurcation analysis of a discrete-time ratio-dependent predator-prey model with Allee Effect

    NASA Astrophysics Data System (ADS)

    Cheng, Lifang; Cao, Hongjun

    2016-09-01

A discrete-time predator-prey model with Allee effect is investigated in this paper. We consider both the strong and the weak Allee effect (the population growth rate is negative and positive at low population density, respectively). From the stability analysis and the bifurcation diagrams, we find that the model with an Allee effect (strong or weak) growth function and the model with a logistic growth function have somewhat similar bifurcation structures. If the predator growth rate is smaller than its death rate, the two species cannot coexist because there are no interior fixed points. When the predator growth rate is greater than its death rate and the other parameters are fixed, the model can have two interior fixed points. One is always unstable, and the stability of the other is determined by the integral step size, which to some extent determines whether the species coexist. As the integral step size increases, period-doubled orbits or invariant circles may bifurcate, so the numbers of prey and predator deviate from a stable state and circulate along periodic or quasi-periodic orbits. When the integral step size is increased to a critical value, chaotic orbits may appear with many period windows, meaning that the numbers of prey and predator become chaotic. The bifurcation diagrams and phase portraits show that the complexity of the model with the strong Allee effect decreases, which is related to the fact that the persistence of the species can be determined by the initial species densities.
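
    The role of the integral step size is easy to reproduce in a toy Euler-discretized ratio-dependent model with an Allee term (the parameter values below are arbitrary, chosen only for this sketch, and are not the paper's):

        import numpy as np

        def step(x, y, h, r=2.0, A=0.1, c=1.5, d=0.5, e=1.0):
            # Prey with strong Allee term minus ratio-dependent predation;
            # populations are clamped at zero to keep the toy map bounded.
            fx = r * x * (1 - x) * (x - A) - c * x * y / (x + y + 1e-12)
            fy = y * (e * c * x / (x + y + 1e-12) - d)
            return max(x + h * fx, 0.0), max(y + h * fy, 0.0)

        for h in (0.5, 1.5, 2.5):              # increasing integral step size
            x, y = 0.6, 0.2
            orbit = []
            for n in range(2200):
                x, y = step(x, y, h)
                if n >= 2000:                  # keep post-transient states only
                    orbit.append((round(x, 3), round(y, 3)))
            # few distinct states -> fixed point or cycle; many -> chaos (or extinction)
            print(h, sorted(set(orbit))[:8])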

  8. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
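
    The iteration described in this record can be sketched directly. In the sketch below, the projected-area function is one plausible (secant-style) assumption, since the record only states that it is a non-linear function of the target pressure and the areas and pressures at the current and previous time steps, and simulate_pressure stands in for the flow-network solve:

        def project_area(a_curr, a_prev, p_curr, p_prev, p_target):
            if abs(p_curr - p_prev) < 1e-12:
                return a_curr
            # secant estimate of the area that would hit the target pressure
            return a_prev + (a_curr - a_prev) * (p_target - p_prev) / (p_curr - p_prev)

        def regulate(simulate_pressure, a0, p_target, k=0.5, tol=1e-4, max_steps=500):
            a_prev, a_curr = a0, a0 * 1.05            # two starting points for the secant
            p_prev, p_curr = simulate_pressure(a_prev), simulate_pressure(a_curr)
            for _ in range(max_steps):
                if abs(p_curr - p_target) < tol:
                    break
                a_proj = project_area(a_curr, a_prev, p_curr, p_prev, p_target)
                a_next = a_curr + k * (a_proj - a_curr)   # rate-controlled update
                a_prev, p_prev = a_curr, p_curr
                a_curr, p_curr = a_next, simulate_pressure(a_next)
            return a_curr

        # toy check: with p = 100/a and target 50, the area converges to ~2.0
        print(regulate(lambda a: 100.0 / a, a0=1.0, p_target=50.0))

    The user-defined rate control parameter k damps the update, trading convergence speed for stability, exactly as in the patented scheme's description.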

  9. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit controversy. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.

  10. Developing Poultry Facility Type Information from USDA Agricultural Census Data for Use in Epidemiological and Economic Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melius, C

    2007-12-05

The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry type operation within the US. At the present time, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types are not independent since some facilities have more than one type of operation. Furthermore, some data on the number of birds represents the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps. The first step defines the rules used to estimate the data that is suppressed within the NASS database. The first step is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high and low risk poultry backyards, and an additional two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties and the resulting premises density plots for the continental US are provided.

  11. Analysis of two-equation turbulence models for recirculating flows

    NASA Technical Reports Server (NTRS)

    Thangam, S.

    1991-01-01

The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.

  12. Spin-charge-family theory is offering next step in understanding elementary particles and fields and correspondingly universe

    NASA Astrophysics Data System (ADS)

    Mankoč Borštnik, Norma Susana

    2017-05-01

    More than 40 years ago the standard model made a successful new step in understanding properties of fermion and boson fields. Now the next step is needed, which would explain what the standard model and the cosmological models just assume: a. The origin of quantum numbers of massless one family members. b. The origin of families. c. The origin of the vector gauge fields. d. The origin of the Higgses and Yukawa couplings. e. The origin of the dark matter. f. The origin of the matter-antimatter asymmetry. g. The origin of the dark energy. h. And several other open problems. The spin-charge-family theory, a kind of the Kaluza-Klein theories in (d = (2n - 1) + 1)-space-time, with d = (13 + 1) and the two kinds of the spin connection fields, which are the gauge fields of the two kinds of the Clifford algebra objects anti-commuting with one another, may provide this much needed next step. The talk presents: i. A short presentation of this theory. ii. The review over the achievements of this theory so far, with some not published yet achievements included. iii. Predictions for future experiments.

  13. From atoms to steps: The microscopic origins of crystal evolution

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2014-07-01

    The Burton-Cabrera-Frank (BCF) theory of crystal growth has been successful in describing a wide range of phenomena in surface physics. Typical crystal surfaces are slightly misoriented with respect to a facet plane; thus, the BCF theory views such systems as composed of staircase-like structures of steps separating terraces. Adsorbed atoms (adatoms), which are represented by a continuous density, diffuse on terraces, and steps move by absorbing or emitting these adatoms. Here we shed light on the microscopic origins of the BCF theory by deriving a simple, one-dimensional (1D) version of the theory from an atomistic, kinetic restricted solid-on-solid (KRSOS) model without external material deposition. We define the time-dependent adatom density and step position as appropriate ensemble averages in the KRSOS model, thereby exposing the non-equilibrium statistical mechanics origins of the BCF theory. Our analysis reveals that the BCF theory is valid in a low adatom-density regime, much in the same way that an ideal gas approximation applies to dilute gasses. We find conditions under which the surface remains in a low-density regime and discuss the microscopic origin of corrections to the BCF model.

  14. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

The authors propose suboptimal least squares or iteratively reweighted least squares (IRWLS) procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at lead times up to 168 hours. The results obtained are documented and compared with results based on the Box and Jenkins method.
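
    For readers unfamiliar with IRWLS, the reweighting idea looks like this in a generic robust-regression setting (Huber-type weights; an illustration of the technique, not the authors' exact procedure):

        import numpy as np

        def irwls(X, y, delta=1.345, n_iter=20):
            # Start from the ordinary least squares solution...
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            for _ in range(n_iter):
                r = y - X @ beta
                s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust scale estimate
                w = np.minimum(1.0, delta * s / (np.abs(r) + 1e-12))  # Huber weights
                # ...then re-solve the weighted normal equations until convergence.
                Xw = X * w[:, None]
                beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            return beta

    Large residuals receive down-weighted influence at each pass, which is what makes the estimates less sensitive to outlying load observations than plain least squares.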

  15. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense-field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  16. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    PubMed

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
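
    The separability property behind the O(N) cost can be shown in two lines: once the relaxation function is approximated as a sum of exponentials, each mode's hereditary integral updates from its value at the previous step alone. The sketch below (notation assumed, not the paper's code) takes the strain rate as constant over each step:

        import numpy as np

        def update_modes(I_prev, strain_rate, dt, c, tau):
            # Exact one-step update for I_k(t) = integral of c_k * exp(-(t-s)/tau_k) * de/dt:
            # I_k(t+dt) = exp(-dt/tau_k) * I_k(t) + c_k * tau_k * (1 - exp(-dt/tau_k)) * de/dt
            decay = np.exp(-dt / tau)
            return decay * I_prev + c * tau * (1.0 - decay) * strain_rate

        # The viscoelastic stress contribution at the new time is the sum over modes,
        # so only the previous step's mode values need to be retained, not the full history.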

  17. Rise time and response measurements on a LiSOCl2 cell

    NASA Technical Reports Server (NTRS)

    Bastien, Caroline; Lecomte, Eric J.

    1992-01-01

Dynamic impedance tests were performed on a 180 Ah LiSOCl2 cell in the frame of a short-term work contract awarded by Aerospatiale as part of the Hermes Space Plane development work. These tests consisted of rise time and response measurements. The rise time test was performed to show the ability to deliver 4 kW, in the nominal voltage range (75-115 V), within less than 100 microseconds, and after a period at rest of 13 days. The response measurements consisted of step response and frequency response tests. The frequency response test was performed to characterize the response of the LiSOCl2 cell to a positive or negative load step of 10 A starting from various currents. The test was performed for various depths of discharge and various temperatures. The test results were used to build a mathematical, electrical model of the LiSOCl2 cell, which is also presented. The test description, test results, electrical model description, and conclusions are presented.

  18. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.

  19. From the Boltzmann to the Lattice-Boltzmann Equation:. Beyond BGK Collision Models

    NASA Astrophysics Data System (ADS)

    Philippi, Paulo Cesar; Hegele, Luiz Adolfo; Surmas, Rodrigo; Siebert, Diogo Nardelli; Dos Santos, Luís Orlando Emerich

In this work, we present a derivation of the lattice-Boltzmann equation directly from the linearized Boltzmann equation, combining the following main features: multiple relaxation times and thermodynamic consistency in the description of non-isothermal compressible flows. The method presented here is based on the discretization of kinetic models of the Boltzmann equation of increasing order. Following a Gross-Jackson procedure, the linearized collision term is developed in Hermite polynomial tensors, and the resulting infinite series is diagonalized after a chosen integer N, establishing the order of approximation of the collision term. The velocity space is discretized in accordance with a quadrature method based on prescribed abscissas (Philippi et al., Phys. Rev. E 73, 056702, 2006). The problem of describing the energy transfer is discussed in relation to the order of approximation of a two-relaxation-times lattice Boltzmann model. The velocity-step, temperature-step and shock tube problems are investigated, adopting lattices with 37, 53 and 81 velocities.

  20. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca

    2016-03-15

Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
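
    The basic (non-adaptive) tau-leaping step that the proposed scheme builds on looks like this for a well-mixed system (the reaction set, rates, and leap size below are assumptions for illustration; the paper's method additionally handles diffusion between spatial compartments and adapts tau):

        import numpy as np

        rng = np.random.default_rng(3)

        def tau_leap_step(x, tau, stoich, propensities):
            # Fire each reaction a Poisson number of times over the leap interval tau.
            a = propensities(x)
            k = rng.poisson(a * tau)
            x_new = x + stoich.T @ k
            return np.maximum(x_new, 0)   # guard against negative populations

        # Example: reversible isomerization A <-> B with unit rate constants.
        stoich = np.array([[-1, 1],      # reaction 1: A -> B
                           [1, -1]])     # reaction 2: B -> A
        prop = lambda x: np.array([1.0 * x[0], 1.0 * x[1]])
        x = np.array([100, 0])
        for _ in range(50):
            x = tau_leap_step(x, 0.01, stoich, prop)
        print(x)   # relaxes toward the 50/50 equilibrium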
