Sample records for time model based

  1. Consequences of Base Time for Redundant Signals Experiments

    PubMed Central

    Townsend, James T.; Honey, Christopher

    2007-01-01

    We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
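The race model inequality named above bounds the redundant-signals CDF by the sum of the single-channel CDFs, F_AB(t) ≤ F_A(t) + F_B(t). A minimal sketch of that test on empirical CDFs follows; the reaction-time samples and time grid are hypothetical, and base time is not modeled here:

```python
# Sketch of a race model inequality (Miller bound) check.
# Data are illustrative; base time is not modeled.

def ecdf(samples, t):
    """Empirical CDF of reaction-time samples, evaluated at time t."""
    return sum(1 for s in samples if s <= t) / len(samples)

def race_model_violations(rt_a, rt_b, rt_ab, grid):
    """Time points where F_AB(t) > F_A(t) + F_B(t), i.e. where the
    redundant-signals CDF exceeds the race model (Miller) bound."""
    return [t for t in grid
            if ecdf(rt_ab, t) > ecdf(rt_a, t) + ecdf(rt_b, t)]
```

Redundant responses much faster than either single channel produce violations; redundant responses slower than both produce none.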

  2. Is questionnaire-based sitting time inaccurate and can it be improved? A cross-sectional investigation using accelerometer-based sitting time

    PubMed Central

    Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas

    2017-01-01

    Objectives To investigate the differences between a questionnaire-based and accelerometer-based sitting time, and develop a model for improving the accuracy of questionnaire-based sitting time for predicting accelerometer-based sitting time. Methods 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1–4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Results Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to −139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the prediction model developed on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. Conclusions This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. PMID:28093433
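The mean bias and limits of agreement reported above are standard Bland-Altman statistics. A minimal sketch of how they are computed follows; the sitting-time numbers below are illustrative, not the study's data:

```python
# Sketch of a Bland-Altman agreement analysis between two measures.
# The sitting-time values below are illustrative.
import statistics

def bland_altman(questionnaire, accelerometer):
    """Mean bias (accelerometer minus questionnaire, min/day) and
    95% limits of agreement (bias +/- 1.96 SD of the differences)."""
    diffs = [a - q for q, a in zip(questionnaire, accelerometer)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```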

  3. Is questionnaire-based sitting time inaccurate and can it be improved? A cross-sectional investigation using accelerometer-based sitting time.

    PubMed

    Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas

    2017-01-16

    To investigate the differences between a questionnaire-based and accelerometer-based sitting time, and develop a model for improving the accuracy of questionnaire-based sitting time for predicting accelerometer-based sitting time. 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1-4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based siting time and other self-reported predictors to predict accelerometer-based sitting time. Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to -139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the developed prediction model on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. Agent-based models suffer from high computation costs when simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimates of building occupancy from sensor data.
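A single Sequential Monte Carlo (particle filter) assimilation step of the kind described can be sketched as: propagate particles through the model, weight them by the sensor likelihood, and resample. The state space, transition, and sensor model below are toy stand-ins, not the paper's:

```python
# Toy sequential Monte Carlo (particle filter) step for data
# assimilation. State space, transition, and likelihood are
# illustrative stand-ins.
import random

def smc_step(particles, transition, likelihood, observation, rng):
    """Propagate particles, weight each by the sensor likelihood of the
    observation, then resample in proportion to the weights."""
    moved = [transition(p, rng) for p in particles]
    weights = [likelihood(observation, p) for p in moved]
    return rng.choices(moved, weights=weights, k=len(particles))
```

With a sensor that strongly favors the true state, resampling quickly concentrates particles on it.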

  5. A method for diagnosing time dependent faults using model-based reasoning systems

    NASA Technical Reports Server (NTRS)

    Goodrich, Charles H.

    1995-01-01

    This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge based Autonomous Test Engineer) which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.

  6. [Predicting Incidence of Hepatitis E in China Using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    PubMed

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those obtained from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.0011 and a forecasting MSE of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.

  7. Emotional moments across time: a possible neural basis for time perception in the anterior insula

    PubMed Central

    Craig, A.D. (Bud)

    2009-01-01

    A model of awareness based on interoceptive salience is described, which has an endogenous time base that might provide a basis for the human capacity to perceive and estimate time intervals in the range of seconds to subseconds. The model posits that the neural substrate for awareness across time is located in the anterior insular cortex, which fits with recent functional imaging evidence relevant to awareness and time perception. The time base in this model is adaptive and emotional, and thus it offers an explanation for some aspects of the subjective nature of time perception. This model does not describe the mechanism of the time base, but it suggests a possible relationship with interoceptive afferent activity, such as heartbeat-related inputs. PMID:19487195

  8. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically-based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...

  9. Solution algorithm of dwell time in slope-based figuring model

    NASA Astrophysics Data System (ADS)

    Li, Yong; Zhou, Lin

    2017-10-01

    Surface slope profile is commonly used to evaluate X-ray reflective optics, which are used in synchrotron radiation beamlines. Moreover, the measurement result of measuring instruments for X-ray reflective optics is usually the surface slope profile rather than the surface height profile. To avoid conversion error, the slope-based figuring model is introduced for processing X-ray reflective optics in place of the traditional surface height-based model. However, the pulse iteration method, which can quickly obtain the dwell time solution of the traditional height-based figuring model, is not applicable to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimization model for the dwell time solution by introducing upper and lower limits on the dwell time and a time gradient constraint. We then used a constrained least squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three iterative polishing runs, the surface slope profile error of the workpiece converged from RMS 5.65 μrad to RMS 1.12 μrad.
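The core numerical step, least squares with upper and lower limits on the dwell time, can be sketched with projected gradient descent. This is a pure-Python illustration of the bounded least-squares idea only; the paper's time-gradient constraint is omitted, and the matrices are made up:

```python
# Sketch of bounded least squares: minimize ||A t - b||^2 subject to
# lo <= t_i <= hi, via projected gradient descent. The time-gradient
# constraint from the paper is omitted; data are illustrative.

def bounded_lsq(A, b, lo, hi, steps=2000, lr=0.05):
    """Projected gradient descent for min ||A t - b||^2, lo <= t <= hi."""
    m, n = len(A), len(A[0])
    t = [min(max(0.0, lo), hi)] * n          # feasible starting point
    for _ in range(steps):
        r = [sum(A[i][j] * t[j] for j in range(n)) - b[i]
             for i in range(m)]              # residual A t - b
        grad = [2 * sum(A[i][j] * r[i] for i in range(m))
                for j in range(n)]           # gradient 2 A^T r
        # gradient step, then project back into the box [lo, hi]
        t = [min(max(t[j] - lr * grad[j], lo), hi) for j in range(n)]
    return t
```

With A the identity, the unconstrained optimum is b itself, so the constrained solution is simply b clipped to the box.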

  10. Robust model predictive control for constrained continuous-time nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.

  11. Immersed boundary lattice Boltzmann model based on multiple relaxation times

    NASA Astrophysics Data System (ADS)

    Lu, Jianhua; Han, Haifeng; Shi, Baochang; Guo, Zhaoli

    2012-01-01

    As an alternative version of the lattice Boltzmann models, the multiple relaxation time (MRT) lattice Boltzmann model introduces much less numerical boundary slip than the single relaxation time (SRT) lattice Boltzmann model if a special relationship between the relaxation time parameters is chosen. On the other hand, most current versions of the immersed boundary lattice Boltzmann method, which was first introduced by Feng and improved by many other authors, suffer from numerical boundary slip, as has been investigated by Le and Zhang. To reduce such numerical boundary slip, an immersed boundary lattice Boltzmann model based on multiple relaxation times is proposed in this paper. A special formula is given relating two relaxation time parameters in the model. Rigorous analysis and numerical experiments show that the numerical boundary slip is reduced dramatically by using the present model compared to the single-relaxation-time-based model.

  12. Automated time activity classification based on global positioning system (GPS) tracking data

    PubMed Central

    2011-01-01

    Background Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. Methods We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Results Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Conclusions Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns. PMID:22082316
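A rule-based classifier of the kind described, with user-defined rules on speed and location, can be sketched in a few lines. The thresholds and the indoor signal below are hypothetical illustrations, not the study's actual rules:

```python
# Illustrative rule-based time-activity classifier for one GPS point.
# Thresholds and the indoor signal are hypothetical, not the study's.

def classify_point(speed_kmh, near_known_indoor_location):
    """Classify a GPS point into a major time-activity category."""
    if near_known_indoor_location:
        return "indoor"
    if speed_kmh < 1.0:        # essentially stationary
        return "outdoor_static"
    if speed_kmh < 7.0:        # typical walking speeds
        return "outdoor_walking"
    return "in_vehicle"
```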

  13. Automated time activity classification based on global positioning system (GPS) tracking data.

    PubMed

    Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph

    2011-11-14

    Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns.

  14. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes

    PubMed Central

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes. PMID:26294903
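The adjustment step can be pictured as a scalar Kalman-style update: the SVM's baseline prediction serves as the prior, and each newly observed bus run pulls it toward the latest data. The travel times and variances below are illustrative, not the paper's parameters:

```python
# Scalar Kalman-style update blending a baseline travel-time prediction
# (e.g. from a trained SVM) with a new observation. Values illustrative.

def kalman_update(prior_mean, prior_var, measured, meas_var):
    """One measurement update: returns the posterior mean and variance."""
    gain = prior_var / (prior_var + meas_var)          # Kalman gain
    mean = prior_mean + gain * (measured - prior_mean)
    var = (1.0 - gain) * prior_var
    return mean, var
```

With equal prior and measurement variances the update is an even blend, and the posterior variance halves.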

  15. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.

    PubMed

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes.

  16. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimizing the overall average response time to all persons, both in greenfield scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached all of the population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population.
    We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
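The maximal covering location problem picks k base sites to maximize the population reachable within a response-time threshold. The study solves the exact optimization model; the sketch below uses the well-known greedy approximation instead, and the site and demand data are made up:

```python
# Greedy sketch of the maximal covering location problem: repeatedly
# pick the site covering the most not-yet-covered population. The
# study uses exact optimization; this heuristic and the data are
# illustrative only.

def greedy_cover(sites, demand, coverage, k):
    """demand: {area: population}; coverage: {site: set of areas
    reachable within the response-time threshold}. Returns the chosen
    sites and the total population covered."""
    remaining, chosen, covered = set(sites), [], set()
    for _ in range(min(k, len(sites))):
        best = max(remaining,
                   key=lambda s: sum(demand[a] for a in coverage[s] - covered))
        remaining.discard(best)
        chosen.append(best)
        covered |= coverage[best]
    return chosen, sum(demand[a] for a in covered)
```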

  17. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model based on the method of change-point detection is proposed for urban road traffic sensor data in this paper. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and forecast travel time. The results show that the model has high accuracy in travel time forecasting.
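The preprocessing described, first-order differencing followed by splitting the series at detected change points, can be sketched as below. The threshold test is a simplified stand-in for the paper's adaptive search, and the series is illustrative:

```python
# Sketch of change-point preprocessing: first-order differencing plus
# a threshold test, then splitting into sections of similar state.
# The fixed threshold is a simplified stand-in for an adaptive search.

def change_points(series, threshold):
    """Indices i where |x_i - x_{i-1}| exceeds the threshold."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

def segments(series, threshold):
    """Split the series into sections of similar state at change points."""
    cuts = [0] + change_points(series, threshold) + [len(series)]
    return [series[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
```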

  18. A bootstrap based space-time surveillance model with an application to crime occurrences

    NASA Astrophysics Data System (ADS)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, which use population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. This study, however, generates expected values for local hotspots from past occurrences rather than population at risk. Bootstrap permutations of previous occurrences are also used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restrictions, (2) enables more frequent surveillance of continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
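The significance test can be sketched as a bootstrap: resample past occurrence counts with replacement and ask how often the resampled mean reaches the newly observed count. A small fraction flags an emerging hotspot. The counts below are illustrative, and this is a simplified stand-in for the paper's space-time permutation scheme:

```python
# Bootstrap significance sketch for one location: resample past counts
# and compare resampled means against the new observation. Simplified
# stand-in for the paper's space-time permutation test; data made up.
import random

def bootstrap_p(past_counts, observed, n_boot=1000, seed=0):
    """Fraction of bootstrap means >= observed; small values flag an
    emerging hotspot."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(past_counts) for _ in past_counts]
        if sum(sample) / len(sample) >= observed:
            hits += 1
    return hits / n_boot
```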

  19. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.

  20. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  1. Development of a Clinical Forecasting Model to Predict Comorbid Depression Among Diabetes Patients and an Application in Depression Screening Policy Making.

    PubMed

    Jin, Haomiao; Wu, Shinyi; Di Capua, Paul

    2015-09-03

    Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting model. We trained and validated 4 machine learning models by using data from 2 safety-net clinical trials; we chose the one with the best overall predictive ability as the ultimate model. We compared model-based policy with alternative policies, including mass screening and partial screening, on the basis of depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the ultimate forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. Partial-screening policy based on depression history alone found only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.
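A model-based screening policy of the kind described scores each patient with a fitted logistic regression and screens only those above a threshold. The sketch below is illustrative: the weights and features are made up, not the study's fitted coefficients:

```python
# Sketch of logistic-regression-based screening. Weights and features
# below are illustrative, not the study's fitted coefficients.
import math

def depression_score(features, weights, bias):
    """Predicted probability from a logistic regression model."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def should_screen(features, weights, bias, threshold=0.5):
    """Screen only patients the model scores above the threshold."""
    return depression_score(features, weights, bias) >= threshold
```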

  2. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals.

  3. Time-dependent oral absorption models

    NASA Technical Reports Server (NTRS)

    Higaki, K.; Yamashita, S.; Amidon, G. L.

    2001-01-01

    The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for dynamic processes in the gastrointestinal tract, such as changes with time in fluid absorption or secretion, in absorption surface area, and in motility. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived mass-balance-based models and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing, and the predicted Cmax values were much lower than those observed. On the other hand, all models with the time-dependent absorption rate coefficient ka(t) were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time (TPLAG model) was also found to be a good model, judging from the sum-of-squares values. This model also described the irregular profiles of plasma concentration with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at predicting irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
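    The models' shared ingredient, a time-dependent absorption rate coefficient ka(t), can be sketched in a minimal one-compartment simulation. The decaying ka(t) form and all rate constants below are hypothetical illustrations, not the fitted FA or TPLAG models from the study:

```python
import numpy as np

def simulate_cmax(ka_func, ke=0.5, V=40.0, dose=80.0, t_end=12.0, dt=0.001):
    """One-compartment oral model with a time-dependent absorption rate
    coefficient ka(t), integrated with a simple Euler scheme; returns Cmax."""
    a_gut, c_plasma = dose, 0.0
    t, cmax = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        absorbed = ka_func(t) * a_gut * dt        # drug leaving the gut
        a_gut -= absorbed
        c_plasma += absorbed / V - ke * c_plasma * dt
        cmax = max(cmax, c_plasma)
        t += dt
    return cmax

# Constant ka versus a (hypothetical) decaying ka(t): an early burst of
# absorption raises the predicted Cmax, mimicking the rapid initial rise
# that a constant-ka model with lag time underestimates.
cmax_const = simulate_cmax(lambda t: 1.0)
cmax_timedep = simulate_cmax(lambda t: 1.0 + 2.0 * np.exp(-t / 0.5))
print(cmax_timedep > cmax_const)
```

Front-loading the absorption rate reproduces the kind of rapid initial increase in plasma concentration described in the abstract; the actual time dependencies in the six models are tied to specific gastrointestinal processes.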

  4. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models are widely applied in industry for time series prediction, trained in a supervised manner with the error back-propagation algorithm. However, a residual error remains between the real value and the prediction result. We therefore designed a two-state neural network model that compensates for this residual error and could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model. In particular, for small-sample time series it was more accurate than the standard MLP model.

  5. Model-assisted template extraction SRAF application to contact holes patterns in high-end flash memory device fabrication

    NASA Astrophysics Data System (ADS)

    Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha

    2018-03-01

    Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.

  6. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
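    As a sketch of the microscopic starting point, the original (unextended) Kirman herding model can be simulated in a few lines. The switching parameters below are hypothetical, and the paper's variable event time scale is not reproduced:

```python
import random

def kirman_step(n, N, eps=0.002, h=0.01, rng=random):
    """One update of Kirman's herding model: n of N agents hold opinion A.
    Switching combines an idiosyncratic term (eps) and a herding term (h)."""
    p_up = (N - n) * (eps + h * n / (N - 1))        # an agent joins A
    p_down = n * (eps + h * (N - n) / (N - 1))      # an agent leaves A
    u = rng.random()
    if u < p_up:
        return n + 1
    if u < p_up + p_down:
        return n - 1
    return n

rng = random.Random(42)
N, n = 100, 50
trajectory = []
for _ in range(20000):
    n = kirman_step(n, N, rng=rng)
    trajectory.append(n / N)

# With weak idiosyncratic switching, the population spends long stretches
# near the extremes (herding) rather than hovering around 0.5.
print(min(trajectory), max(trajectory))
```

The fraction of agents in one state plays the role of the macroscopic variable whose stochastic dynamics the paper maps onto long-range-memory models.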

  7. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  8. Predicting drowsy driving in real-time situations: Using an advanced driving simulator, accelerated failure time model, and virtual location-based services.

    PubMed

    Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua

    2017-02-01

    This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving on past methods based only on a fixed driving time. Copyright © 2016 Elsevier Ltd. All rights reserved.
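    An AFT model assumes covariates rescale the time to failure (here, the onset of drowsiness) multiplicatively. The sketch below uses a Weibull baseline and invented coefficients purely to illustrate the mechanics; the paper's fitted distribution and coefficients are not reproduced:

```python
import math

def aft_survival(t, covariates, betas, scale=3.0, shape=1.5):
    """Weibull AFT sketch: covariates act through an acceleration factor
    AF = exp(sum(beta_i * x_i)); S(t) = exp(-(t / (scale * AF))**shape)."""
    af = math.exp(sum(b * x for b, x in zip(betas, covariates)))
    return math.exp(-(t / (scale * af)) ** shape)

# Single hypothetical covariate: night driving (x = 1) with a negative
# coefficient, so the clock to drowsiness runs faster at night.
s_day = aft_survival(2.0, [0], [-0.5])
s_night = aft_survival(2.0, [1], [-0.5])
print(s_night < s_day)  # drowsiness arrives sooner at night
```

The survival function here gives the probability of remaining alert past a given driving time, which is the quantity a real-time warning system would threshold.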

  9. Clustering of financial time series

    NASA Astrophysics Data System (ADS)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this end, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.

  10. Modern Methods for Modeling Change in Obesity Research in Nursing.

    PubMed

    Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E

    2017-08-01

    Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.

  11. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  12. Disentangling the causal relationships between work-home interference and employee health.

    PubMed

    van Hooff, Madelon L M; Geurts, Sabine A E; Taris, Toon W; Kompier, Michiel A J; Dikkers, Josje S E; Houtman, Irene L D; van den Heuvel, Floor M M

    2005-02-01

    The present study was designed to investigate the causal relationships between (time- and strain-based) work-home interference and employee health. The effort-recovery theory provided the theoretical basis for this study. Two-phase longitudinal data (with a 1-year time lag) were gathered from 730 Dutch police officers to test the following hypotheses with structural equation modeling: (i) work-home interference predicts health deterioration, (ii) health complaints precede increased levels of such interference, and (iii) both processes operate. The relationship between stable and changed levels of work-home interference across time and their relationships with the course of health were tested with a group-by-time analysis of variance. Four subgroups were created that differed in starting point and the development of work-home interference across time. The normal causal model, in which strain-based (but not time-based) work-home interference was longitudinally related to increased health complaints 1 year later, fit the data well and significantly better than the reversed causal model. Although the reciprocal model also provided a good fit, it was less parsimonious than the normal causal model. In addition, both an increment in (strain-based) work-home interference across time and a long-lasting experience of high (strain-based) work-home interference were associated with a deterioration in health. It was concluded that (strain-based) work-home interference acts as a precursor of health impairment and that different patterns of (strain-based) work-home interference across time are related to different health courses. Particularly long-term experience of (strain-based) work-home interference seems responsible for an accumulation of health complaints.

  13. Comparing Budget-based and Tracer-based Residence Times in Butte Basin, California

    NASA Astrophysics Data System (ADS)

    Moran, J. E.; Visser, A.; Esser, B.; Buck, C.

    2017-12-01

    The California Sustainable Groundwater Management Act of 2014 (SGMA) calls for basin-scale Groundwater Sustainability Plans (GSPs) that include a water budget covering a 50 year planning horizon. A nine layer, Integrated Water Flow Model (IWFM) developed for Butte Basin, California, allows examination of water budgets within 36 sub-regions having varying land and water use, to inform SGMA efforts. Detailed land use, soil type, groundwater pumping, and surface water delivery data were applied in the finite element IWFM calibration. In a sustainable system, the volume of storage does not change over a defined time period, and the residence time can be calculated from the water storage volume divided by the flux (recharge or discharge rate). Groundwater ages based on environmental tracer data reflect the mean residence time of groundwater, or its inverse, the turnover rate. Comparisons between budget-based residence times determined from storage and flux, and residence times determined from isotopic tracers of groundwater age, can provide insight into data quality, model reliability, and system sustainability. Budget-based groundwater residence times were calculated from IWFM model output by assuming constant storage and dividing by either averaged annual net recharge or discharge. Calculated residence times range between approximately 100 and 1000 years, with shorter times in subregions where pumping dominates discharge. Independently, 174 wells within the model boundaries were analyzed for tritium-helium groundwater age as part of the California Groundwater Ambient Monitoring and Assessment program. Age distributions from isotopic tracers were compared to model-derived groundwater residence times from groundwater budgets within the subregions of Butte Basin. Mean, apparent, tracer-based residence times are mostly between 20 and 40 years, but 25% of the long-screened wells that were sampled do not have detectable tritium, indicating residence times of more than about 60 years and broad age distributions. A key factor in making meaningful comparisons is to examine budget-based and tracer-based results over transmissive vertical sections, where pumping increases turnover time.
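    The budget-based residence time described above is simple arithmetic: storage volume divided by the recharge or discharge flux. A minimal sketch with hypothetical numbers:

```python
def budget_residence_time(storage_volume, annual_flux):
    """Budget-based mean residence time (years): stored groundwater volume
    divided by the annual recharge or discharge flux (same volume units)."""
    return storage_volume / annual_flux

# Hypothetical sub-region: 5,000 volume units of stored groundwater turned
# over by a net flux of 50 units/year gives a 100-year residence time,
# at the low end of the 100-1000 year range the IWFM budgets produced.
print(budget_residence_time(5000.0, 50.0))  # → 100.0
```

Comparing this number against a tracer-based mean age for the same sub-region is the core of the consistency check the abstract describes.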

  14. Study of a Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill

    DTIC Science & Technology

    2017-09-01

    The discrete-time position of a moving target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling.

  15. A hybrid algorithm for clustering of time series data based on affinity search technique.

    PubMed

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets.
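    The merge step relies on k-Medoids over a precomputed distance matrix. The sketch below uses a plain Euclidean distance on toy series as a stand-in for the paper's shape-based similarity, and a deterministic initialization for reproducibility:

```python
import numpy as np

def k_medoids(dist, k, max_iter=100):
    """Plain k-Medoids on a precomputed distance matrix: assign each point
    to its nearest medoid, then move each medoid to the member with the
    lowest total within-cluster distance, until assignments stabilize."""
    medoids = list(range(k))                       # deterministic init
    labels = np.argmin(dist[:, medoids], axis=1)
    for _ in range(max_iter):
        new_medoids = []
        for j in range(k):
            members = np.where(labels == j)[0]
            costs = dist[np.ix_(members, members)].sum(axis=0)
            new_medoids.append(members[np.argmin(costs)])
        if new_medoids == medoids:
            break
        medoids = new_medoids
        labels = np.argmin(dist[:, medoids], axis=1)
    return labels, medoids

# Toy "time series": two obvious shape groups (rising vs falling).
series = np.array([[0, 1, 2, 3], [0, 1, 2, 4],
                   [3, 2, 1, 0], [4, 2, 1, 0]], dtype=float)
dist = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)
labels, medoids = k_medoids(dist, k=2)
print(labels)  # the two rising series share a cluster, as do the falling pair
```

In the hybrid algorithm the matrix would instead hold shape-based distances computed within the time-similarity subclusters.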

  17. Time does not cause forgetting in short-term serial recall.

    PubMed

    Lewandowsky, Stephan; Duncan, Matthew; Brown, Gordon D A

    2004-10-01

    Time-based theories expect memory performance to decline as the delay between study and recall of an item increases. The assumption of time-based forgetting, central to many models of serial recall, underpins their key behaviors. Here we compare the predictions of time-based and event-based models by simulation and test them in two experiments using a novel manipulation of the delay between study and retrieval. Participants were trained, via corrective feedback, to recall at different speeds, thus varying total recall time from 6 to 10 sec. In the first experiment, participants used the keyboard to enter their responses but had to repeat a word (called the suppressor) aloud during recall to prevent rehearsal. In the second experiment, articulation was again required, but recall was verbal and was paced by the number of repetitions of the suppressor in between retrieval of items. In both experiments, serial position curves for all retrieval speeds overlapped, and output time had little or no effect. Comparative evaluation of a time-based and an event-based model confirmed that these results present a particular challenge to time-based approaches. We conclude that output interference, rather than output time, is critical in serial recall.

  18. Modeling the long-term evolution of space debris

    DOEpatents

    Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.

    2017-03-07

    A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
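    The refinement step, fitting a curve to sampled pairwise distances to locate the time of closest approach, can be sketched with a quadratic fit. The parabola and the sampling times below are illustrative assumptions; the patent text does not specify the curve form:

```python
import numpy as np

def closest_approach(t, d):
    """Fit a parabola to sampled pairwise distances d(t) and return the
    vertex time and value, refining the time of closest approach between
    discrete simulation times."""
    a, b, c = np.polyfit(t, d, 2)
    t_min = -b / (2 * a)
    d_min = np.polyval([a, b, c], t_min)
    return t_min, d_min

# Two objects on crossing straight-line paths: the squared separation is
# exactly quadratic in time, so the fit recovers the true minimum even
# though none of the sample times hits it.
times = np.array([0.0, 1.0, 2.0])
sq_dist = (times - 1.25) ** 2 + 0.04      # true minimum 0.04 at t = 1.25
t_min, d_min = closest_approach(times, sq_dist)
print(round(t_min, 6), round(d_min, 6))  # → 1.25 0.04
```

If the refined minimum falls below the collision criterion, the simulation would then spawn debris objects at the computed positions.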

  19. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to solve the heavy time consumption issue in simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901

  20. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    NASA Astrophysics Data System (ADS)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  1. Experimental verification of a real-time tuning method of a model-based controller by perturbations to its poles

    NASA Astrophysics Data System (ADS)

    Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi

    2018-07-01

    Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.

  2. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors analyse and abstract temporal information during conceptual modelling, according to concrete requirements for history information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives BTEER good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice has shown that this method models temporal information well.

  3. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for evaluating hydrologic model performance.
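    Of the three metrics, mean information gain is the easiest to sketch: symbolize the series by quantiles and compute the conditional entropy of successive symbols. The binary alphabet below is an assumption (the abstract leaves the alphabet size unspecified), and the two complexity metrics are omitted:

```python
import numpy as np
from collections import Counter

def mean_information_gain(series, n_symbols=2):
    """Symbolize a series by quantile thresholds, then compute the
    conditional entropy H(s_t | s_{t-1}) in bits: the average new
    information per step, i.e. the randomness of the signal."""
    qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.searchsorted(qs, series)
    n = len(symbols) - 1
    pairs = Counter(zip(symbols[:-1], symbols[1:]))
    singles = Counter(symbols[:-1])
    h_pairs = -sum(c / n * np.log2(c / n) for c in pairs.values())
    h_singles = -sum(c / n * np.log2(c / n) for c in singles.values())
    return h_pairs - h_singles

rng = np.random.default_rng(0)
random_flow = rng.random(5000)                       # noise-like "flow"
smooth_flow = np.sin(np.linspace(0, 20 * np.pi, 5000))  # predictable "flow"

mig_random = mean_information_gain(random_flow)
mig_smooth = mean_information_gain(smooth_flow)
print(mig_random > mig_smooth)  # the filtered, smoother signal is less random
```

The contrast mirrors the paper's observation that streamflow, filtered by the watershed, is less random than the precipitation that drives it.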

  4. A Kinetic Model Describing Injury-Burden in Team Sports.

    PubMed

    Fuller, Colin W

    2017-12-01

    Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained; these measures, however, do not provide an insight into the variable injury burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The objectives of this study were to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury burden arising from match play during a season of rugby union based on the incidence of match injuries, the rate of recovery from injury and the playing load. The model is equally applicable to other team sports and other scenarios.
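    The composite rate equation can be sketched as a weekly balance: injuries accrue in proportion to playing load and resolve at a first-order rate. All numbers below are hypothetical illustrations, not the study's fitted rugby rates:

```python
def injury_burden(incidence, load, recovery_rate, weeks, b0=0.0):
    """Weekly update of a composite rate equation: new injuries enter at
    incidence * load per week; a fixed fraction of the outstanding
    burden resolves each week (first-order recovery)."""
    burden = [b0]
    for _ in range(weeks):
        b = burden[-1]
        burden.append(b + incidence * load - recovery_rate * b)
    return burden

# Hypothetical club: 2 injuries per 100 player-match-hours, 30 hours of
# match play per week, and 25% of outstanding injuries resolving weekly.
season = injury_burden(incidence=0.02, load=30.0, recovery_rate=0.25, weeks=40)
steady_state = 0.02 * 30.0 / 0.25   # = 2.4 concurrent injuries
print(round(season[-1], 2))
```

The burden climbs from zero toward the steady state of incidence × load / recovery rate, which is the kind of time-based trajectory the kinetic model is built to predict under changing playing loads.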

  5. An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory

    PubMed Central

    Komashie, Alexander; Mousavi, Ali; Clarkson, P. John; Young, Terry

    2015-01-01

    This paper investigates the connection between patient satisfaction, waiting time, staff satisfaction, and service time. It uses a variety of models to enable improvement against experiential and operational health service goals. Patient satisfaction levels are estimated using a model based on waiting times. Staff satisfaction levels are estimated using a model based on the time spent with patients (service time). An integrated model of patient and staff satisfaction, the effective satisfaction level model, is then proposed using queuing theory. This links patient satisfaction, waiting time, staff satisfaction, and service time, connecting two important concepts, namely, experience and efficiency in care delivery, and leading to a more holistic approach to designing and managing health services. The proposed model will enable healthcare systems analysts to objectively and directly relate elements of service quality to capacity planning. Moreover, as an instrument used jointly by healthcare commissioners and providers, it affords the prospect of better resource allocation. PMID:27170899
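
    To make the waiting-time/service-time trade-off concrete, here is a toy sketch built on the standard M/M/1 mean-time-in-system formula W = 1/(mu - lambda); the satisfaction functions and all constants are invented assumptions, not the models proposed in the paper:

```python
def mm1_time_in_system(arrival_rate, service_rate):
    """Mean time in system W = 1/(mu - lambda) for an M/M/1 queue."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def effective_satisfaction(arrival_rate, service_rate,
                           patient_tolerance=30.0, staff_ideal=20.0):
    """Toy combined score: patients dislike waiting, staff dislike being
    rushed. Functional forms and constants are invented assumptions,
    not the satisfaction models of the paper."""
    wait = mm1_time_in_system(arrival_rate, service_rate)   # minutes
    service_time = 1.0 / service_rate                       # minutes/patient
    patient_sat = max(0.0, 1.0 - wait / patient_tolerance)
    staff_sat = min(1.0, service_time / staff_ideal)
    return (patient_sat + staff_sat) / 2.0

# one patient every 20 min on average; sweep mean consultation length 2..18 min
rates = [1.0 / m for m in range(2, 19)]
best = max(rates, key=lambda mu: effective_satisfaction(1.0 / 20.0, mu))
print(round(1.0 / best, 1))  # consultation length balancing both goals
```

    Very short consultations please patients (short queues) but not staff, and very long ones do the reverse; the combined score peaks in between, which is the balance the effective satisfaction level model formalizes.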

  7. Time-dependent pharmacokinetics of dexamethasone and its efficacy in human breast cancer xenograft mice: a semi-mechanism-based pharmacokinetic/pharmacodynamic model.

    PubMed

    Li, Jian; Chen, Rong; Yao, Qing-Yu; Liu, Sheng-Jun; Tian, Xiu-Yun; Hao, Chun-Yi; Lu, Wei; Zhou, Tian-Yan

    2018-03-01

    Dexamethasone (DEX) is a substrate of CYP3A. However, the activity of CYP3A can be induced by DEX when DEX is administered persistently, resulting in auto-induction and time-dependent pharmacokinetics (pharmacokinetics with time-dependent clearance) of DEX. In this study we investigated the pharmacokinetic profiles of DEX after single or multiple doses in human breast cancer xenograft nude mice and established a semi-mechanism-based pharmacokinetic/pharmacodynamic (PK/PD) model for characterizing the time-dependent PK of DEX as well as its anti-cancer effect. The mice were orally given a single or multiple doses (8 mg/kg) of DEX, and the plasma concentrations of DEX were assessed using LC-MS/MS. Tumor volumes were recorded daily. Based on the experimental data, a two-compartment model with first-order absorption and time-dependent clearance was established, and the time dependence of clearance was modeled by a sigmoid Emax equation. Moreover, a semi-mechanism-based PK/PD model was developed, in which the auto-induction effect of DEX on its metabolizing enzyme CYP3A was integrated and drug potency was described using an Emax equation. The PK/PD model was further used to predict the drug efficacy when the auto-induction effect was or was not considered, which further revealed the necessity of including the auto-induction effect in the final PK/PD model. This study established a semi-mechanism-based PK/PD model characterizing the time-dependent pharmacokinetics of DEX and its anti-cancer effect in breast cancer xenograft mice. The model may serve as a reference for DEX dose adjustment or optimization in future preclinical or clinical studies.
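
    The auto-induction mechanism can be sketched numerically: clearance grows with treatment time according to a sigmoid Emax function, so trough concentrations fall under repeated dosing. This sketch uses a one-compartment IV-bolus simplification (the paper's model is two-compartment with first-order absorption), and every parameter value is hypothetical:

```python
def sigmoid_emax(t, emax, t50, gamma):
    """Fractional induction of clearance at treatment time t (sigmoid Emax)."""
    return emax * t**gamma / (t50**gamma + t**gamma) if t > 0 else 0.0

def simulate_conc(dose_times, cl0, v, emax, t50, gamma, t_end, dt=0.1):
    """Euler integration of dC/dt = -(CL(t)/V)*C with unit IV-bolus doses,
    where CL(t) = CL0 * (1 + sigmoid_emax(t))."""
    conc, c, t = [], 0.0, 0.0
    pending = sorted(dose_times)
    while t <= t_end:
        if pending and t >= pending[0]:
            c += 1.0 / v                 # unit dose distributed in volume v
            pending.pop(0)
        cl_t = cl0 * (1.0 + sigmoid_emax(t, emax, t50, gamma))
        c -= (cl_t / v) * c * dt
        conc.append(c)
        t += dt
    return conc

# daily dosing for 10 days; induction roughly doubles clearance, T50 ~ 3 days
conc = simulate_conc(dose_times=[24.0 * d for d in range(10)],
                     cl0=0.05, v=1.0, emax=1.0, t50=72.0, gamma=2.0, t_end=240.0)
trough_day1 = conc[239]     # just before the 2nd dose (t = 23.9 h)
trough_day10 = conc[2399]   # 23.9 h after the 10th dose
print(trough_day10 < trough_day1)  # → True: auto-induction lowers exposure
```

    Ignoring the induction term would predict accumulation toward a higher steady state, which is why the authors found it necessary to include the effect in the final PK/PD model.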

  8. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in areas such as stock price movements, academic enrollments, and weather. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performance with that of Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.

  9. Carotid MR angiography with traditional bolus timing: clinical observations and Fourier-based modelling of contrast kinetics.

    PubMed

    Menke, Jan

    2009-11-01

    This study analyses the relation between image quality and contrast kinetics in bolus-timed carotid magnetic resonance angiography (MRA) and interprets the findings by Fourier-based numerical modelling. One hundred patients prone to carotid stenosis were studied using contrast-enhanced carotid MRA with bolus timing. The carotid MRAs were timed to start relatively early without accounting for the injection time of the contrast medium. For interpretation different starting times were modelled, utilising the spectral information of the test bolus series. In the test bolus series the arterial time-to-peak showed a large 95% confidence interval of 12-27 s, indicating the need for individual MRA timing. All bolus-timed MRAs were of good diagnostic quality. The mean (+/-SD) arterial contrast-to-noise ratio was 53.0 (+/-12.8) and thus high, and 95% of the MRAs showed a slight venous contamination of 11.8% or less (median 5.6%). According to the Fourier-based modelling the central k-space may be acquired about 2 s before the arterial contrast peak. This results in carotid MRAs with sufficiently high arterial enhancement and little venous contamination. In conclusion, in bolus-timed carotid MRA a relatively short timing provides good arterial contrast with little venous contamination, which can be explained by Fourier-based numerical modelling of the contrast kinetics.

  10. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
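
    A rough feel for Markov chain clustering can be had from a hard-assignment (k-means style) sketch, which alternates between fitting a transition matrix per group and reassigning each sequence to the chain under which it is most likely; the paper's Bayesian MCMC treatment and multinomial logit classification are not attempted here:

```python
import math
import random

def transition_matrix(seqs, n_states, pseudo=0.5):
    """MLE transition probabilities with a small smoothing pseudo-count."""
    counts = [[pseudo] * n_states for _ in range(n_states)]
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

def loglik(seq, P):
    """Log-likelihood of a sequence under a first-order Markov chain."""
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

def markov_cluster(seqs, n_states, k, iters=20, seed=0):
    """Hard-assignment analogue of mixture-of-Markov-chains clustering."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in seqs]
    for _ in range(iters):
        chains = [transition_matrix([s for s, l in zip(seqs, labels) if l == g],
                                    n_states) for g in range(k)]
        labels = [max(range(k), key=lambda g: loglik(s, chains[g])) for s in seqs]
    return labels

# two planted regimes: "sticky" chains versus rapidly alternating chains
rng = random.Random(42)
def sample(p_stay, n=300):
    s = [rng.randrange(2)]
    for _ in range(n - 1):
        s.append(s[-1] if rng.random() < p_stay else 1 - s[-1])
    return s

seqs = [sample(0.95) for _ in range(10)] + [sample(0.05) for _ in range(10)]
labels = markov_cluster(seqs, n_states=2, k=2)
print("sticky group:", labels[:10], "alternating group:", labels[10:])
```

    With well-separated transition dynamics, the two planted groups are recovered; the full Bayesian version additionally quantifies uncertainty in the group indicators.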

  11. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
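
    The detection logic can be sketched as follows; the z-score significance measure and both thresholds are invented stand-ins for whatever the patented system actually uses:

```python
def detect_anomaly(measurements, expected, sigma, z_thresh=3.0, persist_thresh=5):
    """Flag a structural anomaly only when the modeling error stays
    statistically significant for persist_thresh consecutive samples.
    Thresholds here are made up; the patent does not dictate these values."""
    persistence = 0
    for meas, exp in zip(measurements, expected):
        z = abs(meas - exp) / sigma              # significance of modeling error
        persistence = persistence + 1 if z > z_thresh else 0
        if persistence >= persist_thresh:
            return True                          # persistent error -> anomaly
    return False

expected = [10.0] * 20                           # model-predicted response
healthy = [10.0 + (0.2 if i % 2 else -0.2) for i in range(20)]
damaged = [10.0] * 10 + [12.5] * 10              # sustained shift after sample 10
print(detect_anomaly(healthy, expected, sigma=0.5),
      detect_anomaly(damaged, expected, sigma=0.5))  # → False True
```

    Requiring persistence rather than a single significant sample is what keeps transient noise from triggering false alarms.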

  12. Partition-based discrete-time quantum walks

    NASA Astrophysics Data System (ADS)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.

  13. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

    Detonation waves in insensitive, TATB based explosives are believed to have multi-time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry resolved kinetic flow, as CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. An HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and a comparison with experimental data.

  14. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    PubMed

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from the search for the best split point for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data consisting of covariates with many split-points, based on the values of the bootstrap cross-validated estimates of integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.

  15. A Method for Generating Reduced Order Linear Models of Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1997-01-01

    For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.

  16. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
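
    The conceptual-reservoir idea can be illustrated with a small CSTR cascade; the flow, volume, and decay values are arbitrary, and the paper's PFR elements and calibration step are omitted:

```python
def cstr_cascade(inflow_conc, flow, volumes, decay_rate, dt=0.01, t_end=50.0):
    """Chain of continuously stirred tank reactors (CSTRs), Euler-integrated:
    dC_i/dt = (Q/V_i)*(C_{i-1} - C_i) - k*C_i. A sketch of the
    conceptual-reservoir idea only; the paper also uses plug-flow reactors
    and residence times calibrated to hydrodynamic simulations."""
    conc = [0.0] * len(volumes)
    t = 0.0
    while t < t_end:
        upstream = inflow_conc
        new = []
        for c, v in zip(conc, volumes):
            dc = (flow / v) * (upstream - c) - decay_rate * c
            new.append(c + dc * dt)
            upstream = c
        conc = new
        t += dt
    return conc

# steady state of each tank: C_i = C_{i-1} * (Q/V) / (Q/V + k)
out = cstr_cascade(inflow_conc=1.0, flow=2.0, volumes=[4.0, 4.0], decay_rate=0.1)
print([round(c, 3) for c in out])  # → [0.833, 0.694]
```

    Each reservoir is a single state variable, which is why a handful of calibrated reservoirs can replace thousands of hydrodynamic nodes at a tiny fraction of the cost.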

  17. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.

    2017-12-05

    In this paper, a cyclic-plasticity based fully mechanistic fatigue modeling approach is presented. This is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than just on the end-of-life information typically used for empirical S-N curve based fatigue evaluation approaches. Previously we presented material models based on constant amplitude fatigue tests for 316 SS base, 508 LAS base and 316 SS-316 SS weld metal, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper, we present a more advanced approach that can be used for modeling the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under any arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (either based on time/cycle or based on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed and associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.

  19. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.

  20. A framework for the use of agent based modeling to simulate ...

    EPA Pesticide Factsheets

    Simulation of human behavior in exposure modeling is a complex task. Traditionally, inter-individual variation in human activity has been modeled by drawing from a pool of single-day time-activity diaries such as the US EPA Consolidated Human Activity Database (CHAD). Here, an agent-based model (ABM) is used to simulate population distributions of longitudinal patterns of four macro activities (sleeping, eating, working, and commuting) in populations of adults over a period of one year. In this ABM, an individual is modeled as an agent whose movement through time and space is determined by a set of decision rules. The rules are based on the agent having time-varying “needs” that are satisfied by performing actions. Needs are modeled as increasing over time, and taking an action reduces the need. Need-satisfying actions include sleeping (meeting the need for rest), eating (meeting the need for food), and commuting/working (meeting the need for income). Every time an action is completed, the model determines the next action the agent will take based on the magnitude of each of the agent’s needs at that point in time. Different activities advertise their ability to satisfy various needs of the agent (such as food to eat or sleeping in a bed or on a couch). The model then chooses the activity that satisfies the greatest of the agent’s needs. When multiple actions could address a need, the model will choose the most effective of the actions (bed over the couch).
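
    The need-driven decision rule described above can be sketched in a few lines; the needs, growth rates, and activity durations are invented for illustration and are not derived from CHAD:

```python
def run_agent(hours):
    """Minimal needs-driven agent: needs grow over time, the agent always
    serves its currently greatest need, and acting on a need resets it.
    Growth rates and durations are invented, not drawn from CHAD."""
    growth = {"rest": 1.0, "food": 1.4, "income": 0.8}   # need growth per hour
    duration = {"rest": 8, "food": 1, "income": 9}        # hours per activity
    needs = {k: 0.0 for k in growth}
    log = []
    t = 0
    while t < hours:
        chosen = max(needs, key=needs.get)                # greatest need wins
        for k in needs:
            needs[k] += growth[k] * duration[chosen]      # needs rise meanwhile
        needs[chosen] = 0.0                               # the action satisfies it
        log.append(chosen)
        t += duration[chosen]
    return log

day = run_agent(72)   # three simulated days
print(all(a in {"rest", "food", "income"} for a in day) and "income" in day)  # → True
```

    Even this toy version produces plausible longitudinal patterns, alternating sleep, meals, and work, without ever sampling from a diary pool.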

  1. Hybrid discrete-time neural networks.

    PubMed

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
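
    As a concrete example of a map-based neuron of the kind reviewed here, the two-variable Rulkov map iterates a fast membrane-like variable and a slow adaptation variable; the parameter values below are illustrative choices, not taken from the paper:

```python
def rulkov_step(x, y, alpha, mu, sigma):
    """One iteration of the Rulkov map, a standard map-based neuron model."""
    x_new = alpha / (1.0 + x * x) + y          # fast variable (membrane-like)
    y_new = y - mu * (x - sigma)               # slow variable (adaptation)
    return x_new, y_new

x, y = -1.0, -3.0
trace = []
for _ in range(2000):
    x, y = rulkov_step(x, y, alpha=4.5, mu=0.001, sigma=-1.0)
    trace.append(x)

# count upward threshold crossings ("spikes") of the fast variable
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print(spikes > 0)  # → True: the fast variable crosses threshold
```

    The fast-slow structure (mu much less than 1) is what makes such maps amenable to the fast-slow decomposition and phase-plane analysis the review describes, and fast threshold modulation switches the fast equation between states in the networked version.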

  2. Opportunistic Capacity-Based Resource Allocation for Chunk-Based Multi-Carrier Cognitive Radio Sensor Networks

    PubMed Central

    Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi

    2017-01-01

    The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. However, in practice, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varied spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varied spectrum resources. We present a novel opportunistic capacity model through a continuous time semi-Markov chain (CTSMC) to describe the time-varied spectrum resources of chunks and, based on this, a joint power and chunk allocation model by considering the opportunistically available capacity of chunks is proposed. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm can achieve better performance compared with traditional algorithms when the spectrum environment is time-varied. PMID:28106803

  3. Chemistry resolved kinetic flow modeling of TATB based explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark

    2012-03-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. We present here two variants of our new rate model and a comparison with hot, ambient, and cold experimental data for PBX 9502.

  4. An assembly process model based on object-oriented hierarchical time Petri Nets

    NASA Astrophysics Data System (ADS)

    Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui

    2017-04-01

    In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole through the local level down to the details, and subnet models of different levels of object-oriented Petri Nets are established. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.

  5. Angular Random Walk Estimation of a Time-Domain Switching Micromachined Gyroscope

    DTIC Science & Technology

    2016-10-19

    2. Parametric system identification based on time-domain switching. 3. Finite element modeling of resonator: this section details basic finite element modeling of the resonator used with the TDSMG. Based on finite element simulations of the employed resonator, it is found that the effects of thermomechanical noise are on par with 10 ps of timing

  6. [Real-time irrigation forecast of cotton mulched with plastic film under drip irrigation based on meteorological data].

    PubMed

    Shen, Xiao-jun; Sun, Jing-sheng; Li, Ming-si; Zhang, Ji-yang; Wang, Jing-lei; Li, Dong-wei

    2015-02-01

    It is important to improve real-time irrigation forecasting precision by predicting the real-time water consumption of cotton mulched with plastic film under drip irrigation based on meteorological data and cotton growth status. The model parameters for calculating ET0 based on the Hargreaves formula were determined using historical meteorological data from 1953 to 2008 in the Shihezi reclamation area. According to the field experimental data of the 2009-2010 growing seasons, a model for computing the crop coefficient Kc was established based on accumulated temperature. On the basis of ET0 and Kc, a real-time irrigation forecast model was finally constructed, and it was verified against the field experimental data of 2011. The results showed that the forecast model had high forecasting precision, and the average absolute values of relative error between the predicted and measured values were about 3.7%, 2.4% and 1.6% during the seedling, squaring and blossom-boll forming stages, respectively. The forecast model can be used to modify the predicted values in time according to real-time meteorological data and to guide water management in local film-mulched cotton fields under drip irrigation.
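
    For reference, the Hargreaves ET0 formula and the Kc scaling used in such forecasts can be sketched as follows; the weather values and the Kc value are hypothetical, and the paper's accumulated-temperature Kc model is not reproduced:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves reference evapotranspiration ET0 (mm/day), with ra the
    extraterrestrial radiation expressed in equivalent mm/day."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def crop_water_use(t_mean, t_max, t_min, ra, kc):
    """ETc = Kc * ET0. The paper derives Kc from accumulated temperature;
    here a plausible mid-season Kc is simply passed in."""
    return kc * hargreaves_et0(t_mean, t_max, t_min, ra)

# a hot mid-summer day in an arid area (all numbers hypothetical)
et0 = hargreaves_et0(t_mean=28.0, t_max=36.0, t_min=20.0, ra=16.5)
etc = crop_water_use(28.0, 36.0, 20.0, 16.5, kc=1.15)
print(round(et0, 2), round(etc, 2))  # → 6.95 8.0
```

    Hargreaves needs only daily temperature extremes, which is what makes it attractive for real-time forecasting from routine meteorological data.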

  7. The Elastic Body Model: A Pedagogical Approach Integrating Real Time Measurements and Modelling Activities

    ERIC Educational Resources Information Center

    Fazio, C.; Guastella, I.; Tarantino, G.

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it and on modelling of the experimental results by using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities have been built in the…

  8. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams-Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden-Julian Oscillation, a dominant tropical atmospheric wave pattern.
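
    The classical AR stability condition mentioned above is easy to state for order two; this sketch checks the stationarity triangle and simulates a stable AR(2) model. The paper's algebraic construction, which also enforces Adams-Bashforth consistency from long-time statistics alone, is not reproduced here:

```python
import random

def ar2_is_stable(a1, a2):
    """Stationarity triangle for AR(2): x_t = a1*x_{t-1} + a2*x_{t-2} + e_t.
    All characteristic roots lie inside the unit circle iff these hold."""
    return a1 + a2 < 1 and a2 - a1 < 1 and abs(a2) < 1

def simulate_ar2(a1, a2, n=5000, seed=7):
    """Drive the AR(2) recursion with unit-variance Gaussian noise."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))
    return x

stable_series = simulate_ar2(0.5, 0.3)      # (0.5, 0.3) is inside the triangle
print(ar2_is_stable(0.5, 0.3),              # → True
      ar2_is_stable(0.9, 0.2),              # 0.9 + 0.2 > 1: explosive → False
      max(abs(v) for v in stable_series) < 50.0)
```

    A stable parameter pair keeps the simulated signal bounded, while an explosive pair such as (0.9, 0.2) would grow without limit.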

  9. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    NASA Astrophysics Data System (ADS)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established that combines mechanism models of sintering with time-weighted error compensation based on the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. Because the process is simplified in the mechanism models, they cannot describe the high nonlinearity of the process, and errors are therefore inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
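
    As a rough sketch of the error-compensation idea (not the authors' exact formulation, and with all variable names and functions hypothetical), an ELM with a random hidden layer and least-squares output weights can be trained on the residuals of a simplified mechanism model:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=60):
    """Extreme learning machine: random fixed hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy residual compensation: learn the error of a crude "mechanism" model
X = rng.uniform(-1, 1, size=(200, 2))
target = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2   # stand-in for a quality index
mech = X[:, 0]                                  # stand-in simplified mechanism model
W, b, beta = elm_fit(X, target - mech)
pred = mech + elm_predict(X, W, b, beta)        # hybrid: mechanism + ELM compensation
err = np.max(np.abs(pred - target))
```

    The hybrid prediction error `err` should be far smaller than the mechanism model's alone, which is the point of the compensation step.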

  10. Real-time simulation of a Doubly-Fed Induction Generator based wind power system on eMEGASim™ Real-Time Digital Simulator

    NASA Astrophysics Data System (ADS)

    Boakye-Boateng, Nasir Abdulai

    The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems, as a basis for more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimPowerSystems™ DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS™ components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models comprising in total 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGASim™ Real-Time Digital Simulator.

  11. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change over time in gene expression at different resolutions. The proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The grouped genes identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a new perspective and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
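
    Multi-resolution features of a time course can be obtained by wavelet decomposition; a minimal sketch using the Haar wavelet (chosen here for simplicity — the abstract does not specify the basis) illustrates how an expression profile splits into change patterns at successive scales:

```python
import numpy as np

def haar_decompose(x):
    """Multi-resolution Haar decomposition of a length-2^k time course.
    Returns detail coefficients per level (fine -> coarse) and the coarsest value."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth part
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # change pattern at this scale
        details.append(detail)
        x = approx
    return details, x[0]

# An 8-point step-like expression time course
profile = [1, 1, 2, 2, 4, 4, 8, 8]
details, coarse = haar_decompose(profile)
# finest-level details vanish because adjacent pairs are equal;
# the orthonormal transform preserves total energy (sum of squares)
```

    Coefficient vectors such as `details` could then be fed to a mixture-model clustering step; that downstream step is not shown here.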

  12. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from including spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity, differing in their temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction: the mixture component accommodates global space-time interaction as well as departures from the overall spatial and temporal risk patterns. The study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and in-sample evaluation of the predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification; it is evidently more complex because it borrows information from neighboring years, but the added parameters yielded a significant advantage in posterior deviance, which in turn benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture against traditional space-time components for each temporal model; the mixture models consistently outperformed the corresponding Base models owing to their much lower deviance.
    For cross-validation comparison of predictive accuracy, the linear time trend model was adjudged the best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimated, normally predicted, and model replicated. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects; these results indicate the mediocre performance of the linear trend when random effects are excluded from evaluation, possibly because the flexible mixture space-time interaction efficiently absorbs the residual variability escaping the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, which generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Personalized State-space Modeling of Glucose Dynamics for Type 1 Diabetes Using Continuously Monitored Glucose, Insulin Dose, and Meal Intake: An Extended Kalman Filter Approach.

    PubMed

    Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan

    2014-03-01

    An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, physical activity, and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that explicitly quantify the time-varying, patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate the time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions achieve a higher fit and significantly better temporal gain and J index, and are thus superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs, since the Kalman filter estimates the state variable of the glucose dynamics from the measured glucose time series. In addition, since the model parameters are estimated in real time, the model is also suitable for adaptive control. © 2014 Diabetes Technology Society.
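
    The coefficient-tracking idea can be sketched in the linear special case, where estimating time-varying coefficients under a random-walk state model reduces to an ordinary Kalman filter (the paper's full model requires the EKF; the variable names and noise settings below are illustrative assumptions):

```python
import numpy as np

def kalman_coeff_track(y, u, q=1e-4, r=0.1):
    """Track time-varying coefficients theta_t in y_t = u_t . theta_t + noise,
    assuming a random-walk state model theta_t = theta_{t-1} + w_t."""
    n, k = u.shape
    theta = np.zeros(k)              # coefficient (state) estimate
    P = np.eye(k)                    # state covariance
    Q = q * np.eye(k)
    est = np.zeros((n, k))
    for t in range(n):
        P = P + Q                                # predict step (random walk)
        h = u[t]                                 # observation vector
        S = float(h @ P @ h) + r                 # innovation variance
        K = P @ h / S                            # Kalman gain
        theta = theta + K * (y[t] - h @ theta)   # update with the innovation
        P = P - np.outer(K, h @ P)
        est[t] = theta
    return est

# Toy check: constant true coefficients recovered from noisy observations
rng = np.random.default_rng(1)
u = rng.normal(size=(500, 2))
true_theta = np.array([1.5, -0.7])
y = u @ true_theta + 0.1 * rng.normal(size=500)
est = kalman_coeff_track(y, u)
# est[-1] converges near [1.5, -0.7]
```

    In the glucose setting, the rows of `u` would hold insulin and meal inputs, and `est` would track their patient-specific, time-varying effects.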

  14. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and on lifting surface theory. Modeling by strip theory focuses on three points: first, FLUENT is adopted to calculate the lift and drag coefficients of the propeller; next, a method is presented to calculate the induced velocity which occurs in the ground rig test; finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory; this approximation reduces computing time while retaining good accuracy. Comparison of the two modeling techniques shows that the model based on strip theory, which is advantageous in both real-time performance and accuracy, can meet the requirement.

  15. A Sarsa(λ)-based control model for real-time traffic light coordination.

    PubMed

    Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur when traffic demand from a growing number of vehicles exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction seeks greater traffic flow, and in the process junctions form a policy of coordination, as well as constraints on adjacent junctions, to maximize their own interests. A good traffic signal timing policy helps solve the problem; however, because so many factors can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience has left them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement-learning-based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns a traffic cost for each vehicle, which accounts for delay time, the number of waiting vehicles, and the integrated saturation, and uses this experience to determine the optimal actions. The experimental results show an inspiring improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control.
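
    A generic tabular Sarsa(λ) update with eligibility traces, shown on a toy chain environment rather than the paper's traffic simulator (the environment, rewards, and hyperparameters are assumptions of this sketch), might look like:

```python
import random

def sarsa_lambda(n_states, n_actions, step, episodes=200,
                 alpha=0.1, gamma=0.9, lam=0.8, eps=0.1):
    """Tabular Sarsa(lambda) with replacing eligibility traces.
    `step(s, a)` must return (next_state, reward, done)."""
    def policy(Q, s):  # epsilon-greedy action selection
        if random.random() < eps:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda x: Q[s][x])

    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        E = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s, a, done = 0, policy(Q, 0), False
        while not done:
            s2, r, done = step(s, a)
            a2 = policy(Q, s2)
            delta = r + (0.0 if done else gamma * Q[s2][a2]) - Q[s][a]
            E[s][a] = 1.0                       # replacing trace
            for i in range(n_states):
                for j in range(n_actions):
                    Q[i][j] += alpha * delta * E[i][j]
                    E[i][j] *= gamma * lam      # decay all traces
            s, a = s2, a2
    return Q

# Toy chain: action 1 moves right (reward 1 at the end), action 0 stays put.
def step(s, a):
    if a == 1:
        s += 1
    return (s, 1.0, True) if s >= 4 else (s, 0.0, False)

random.seed(0)
Q = sarsa_lambda(5, 2, step)
# the learned values favor moving right toward the reward
```

    In the traffic application, states would encode junction conditions, actions the signal timings, and the reward the negative traffic cost described above.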

  16. Short-Term Memory Scanning Viewed as Exemplar-Based Categorization

    ERIC Educational Resources Information Center

    Nosofsky, Robert M.; Little, Daniel R.; Donkin, Christopher; Fific, Mario

    2011-01-01

    Exemplar-similarity models such as the exemplar-based random walk (EBRW) model (Nosofsky & Palmeri, 1997b) were designed to provide a formal account of multidimensional classification choice probabilities and response times (RTs). At the same time, a recurring theme has been to use exemplar models to account for old-new item recognition and to…

  17. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 1. OVERVIEW

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four report volumes. Moreover, the tests are generally applicable to other model evaluation problem...

  18. Application of seemingly unrelated regression in medical data with intermittently observed time-dependent covariates.

    PubMed

    Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam

    2012-01-01

    BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), which presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR) based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sampling properties of the estimated coefficients across the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate, without any imputation, the resulting biases were greater. The proposed SUR-based models performed similarly to MRMs with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates and can thus be used as an alternative to MRMs.

  19. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm which addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.

  20. Transient modeling in simulation of hospital operations for emergency response.

    PubMed

    Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li

    2006-01-01

    Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its application to an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation for hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times, determined using various patient mixes. The model guides patients based on the severity of their injuries and queues patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
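
    Regressing exponential-model parameters against simulation output can be sketched as follows, with a hypothetical transient waiting-time form W(t) = a(1 - exp(-t/tau)) and synthetic data standing in for the designed simulation experiments:

```python
import numpy as np

def fit_transient(t, w, taus):
    """Fit w(t) ~ a * (1 - exp(-t/tau)) by scanning candidate time constants
    and solving the amplitude a in closed form (linear least squares)."""
    best = None
    for tau in taus:
        g = 1.0 - np.exp(-t / tau)
        a = float(g @ w) / float(g @ g)          # least-squares amplitude
        sse = float(np.sum((w - a * g) ** 2))    # residual sum of squares
        if best is None or sse < best[0]:
            best = (sse, a, tau)
    return best[1], best[2]

# Synthetic transient curve: steady-state wait 120 min, time constant 6 h
t = np.linspace(0, 48, 200)
w = 120 * (1 - np.exp(-t / 6.0))
a, tau = fit_transient(t, w, taus=np.linspace(1, 12, 111))
# recovers a = 120 and tau = 6
```

    In the paper's setting, `w` would come from simulation experiments at each hospital configuration, and the fitted (a, tau) pair would enable real-time capacity estimation without rerunning the simulation.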

  1. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

    Time series data are a series of observations taken or measured at equal time intervals. Time series analysis is used to analyze data while accounting for the effect of time; its purpose is to characterize the patterns of a data set and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that modeling electric energy consumption with a state space model of order 4 yields a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
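
    The reported MAPE follows the usual definition, mean(|actual - forecast| / actual) x 100%; by a common rule of thumb, values below 10% indicate highly accurate forecasting, which is presumably the "very good" category referenced above. A minimal sketch with illustrative numbers (not the PLN Kroya data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Illustrative consumption values and forecasts
actual = [100.0, 200.0, 400.0]
forecast = [90.0, 210.0, 400.0]
# per-point errors are 10%, 5%, and 0%, so MAPE = 5%
```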

  2. Can air temperature be used to project influences of climate change on stream temperature?

    USGS Publications Warehouse

    Arismendi, Ivan; Safeeq, Mohammad; Dunham, Jason B.; Johnson, Sherri L.

    2014-01-01

    Worldwide, lack of data on stream temperature has motivated the use of regression-based statistical models to predict stream temperatures based on more widely available data on air temperatures. Such models have been widely applied to project responses of stream temperatures under climate change, but the performance of these models has not been fully evaluated. To address this knowledge gap, we examined the performance of two widely used linear and nonlinear regression models that predict stream temperatures based on air temperatures. We evaluated model performance and temporal stability of model parameters in a suite of regulated and unregulated streams with 11–44 years of stream temperature data. Although such models may have validity when predicting stream temperatures within the span of time that corresponds to the data used to develop them, model predictions did not transfer well to other time periods. Validation of model predictions of most recent stream temperatures, based on air temperature–stream temperature relationships from previous time periods often showed poor performance when compared with observed stream temperatures. Overall, model predictions were less robust in regulated streams and they frequently failed in detecting the coldest and warmest temperatures within all sites. In many cases, the magnitude of errors in these predictions falls within a range that equals or exceeds the magnitude of future projections of climate-related changes in stream temperatures reported for the region we studied (between 0.5 and 3.0 °C by 2080). The limited ability of regression-based statistical models to accurately project stream temperatures over time likely stems from the fact that underlying processes at play, namely the heat budgets of air and water, are distinctive in each medium and vary among localities and through time.
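
    A widely used nonlinear form in this literature is the logistic air-to-stream temperature relationship (the Mohseni form); the sketch below evaluates it with hypothetical parameters to show its saturating behavior, which is one reason such models miss the coldest and warmest extremes:

```python
import numpy as np

def mohseni(air_t, alpha, beta, gamma, mu=0.0):
    """Logistic air-to-stream temperature model (Mohseni-type form):
    stream temp rises from mu toward alpha, with inflection at air temp beta."""
    return mu + (alpha - mu) / (1.0 + np.exp(gamma * (beta - air_t)))

# Illustrative parameters: stream temperature saturates near 22 C,
# with the inflection point at an air temperature of 13 C
air = np.array([-5.0, 13.0, 30.0])
stream = mohseni(air, alpha=22.0, beta=13.0, gamma=0.15)
# at air_t = beta the curve sits at exactly alpha/2 = 11 C,
# and the response flattens toward both extremes
```

    The flattening at both tails illustrates why regression-based predictions can fail precisely for the coldest and warmest observations, as reported above.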

  3. Process-based modeling of temperature and water profiles in the seedling recruitment zone: Part II. Seedling emergence timing

    USDA-ARS?s Scientific Manuscript database

    Predictions of seedling emergence timing for spring wheat are facilitated by process-based modeling of the microsite environment in the shallow seedling recruitment zone. Hourly temperature and water profiles within the recruitment zone for 60 days after planting were simulated from the process-base...

  4. Multiscale Modeling of Angiogenesis and Predictive Capacity

    NASA Astrophysics Data System (ADS)

    Pillay, Samara; Byrne, Helen; Maini, Philip

    Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models incorporating single, multi-species and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
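
    One update of an agent-based model with biased random walks and volume exclusion can be sketched on a 1D lattice (a deliberate simplification of the model described above; the sequential update order and all parameters are assumptions of this sketch):

```python
import random

def step_agents(occ, p_right=0.7):
    """One sweep of a biased random walk on a 1D lattice with volume exclusion:
    each agent attempts one move; moves into occupied sites are aborted."""
    n = len(occ)
    order = [i for i in range(n) if occ[i]]
    random.shuffle(order)                  # random sequential update over agents
    for i in order:
        if not occ[i]:
            continue                       # site vacated earlier in this sweep
        d = 1 if random.random() < p_right else -1
        j = i + d
        if 0 <= j < n and not occ[j]:      # exclusion: only move into empty sites
            occ[i], occ[j] = 0, 1
    return occ

random.seed(2)
occ = [1] * 5 + [0] * 45                   # 5 agents start at the left edge
for _ in range(200):
    occ = step_agents(occ)
# agent count is conserved, and the rightward bias drifts the mass right
```

    In the angiogenesis setting, the bias would represent chemotaxis toward the tumor, and birth/death events (omitted here) would be added per site.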

  5. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

    Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models, and consequently combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to supporting personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
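
    The switching idea, selecting whichever model in the pool has the lowest recent prediction error at each time point, can be sketched as follows (the error window and the two toy models are assumptions, not the authors' pool):

```python
import numpy as np

def switching_forecast(y, models, window=10):
    """One-step-ahead forecasting with adaptive model switching: at each time,
    use the model with the lowest mean absolute error over the last `window`
    one-step predictions. Each model maps a history list to a forecast."""
    n = len(y)
    preds = np.array([[m(y[:t]) for m in models] for t in range(1, n)])
    errs = np.abs(preds - np.asarray(y[1:])[:, None])
    chosen = np.zeros(n - 1, dtype=int)
    out = np.empty(n - 1)
    for t in range(n - 1):
        lo = max(0, t - window)
        # score each model on errors of predictions made strictly before t
        score = errs[lo:t].mean(axis=0) if t > 0 else np.zeros(len(models))
        chosen[t] = int(np.argmin(score))
        out[t] = preds[t, chosen[t]]
    return out, chosen

# Toy pool: a "population" constant model vs a patient-specific last-value model
population_mean = lambda hist: 5.0          # population-level guess
persistence = lambda hist: hist[-1]         # patient-specific last value

y = list(np.linspace(10, 20, 30))           # a drifting patient series
out, chosen = switching_forecast(y, [population_mean, persistence])
# the switcher starts on model 0 by default, then locks onto persistence
```

    The real framework scores richer model classes the same way, so a patient inherits population knowledge early and patient-specific models once enough data accrue.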

  6. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predictions indicate that population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to the group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of IBM parameter settings.
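
    A discrete-time step combining stochastic logistic growth with nearest-neighbour movement, in the spirit of the IBM described above (reduced to 1D, with simplified per-individual event rules that are assumptions of this sketch), might look like:

```python
import random

def generation(pop, r=0.1, K=50, m=0.2):
    """One discrete time step of stochastic logistic growth plus nearest-neighbour
    diffusion on a 1D array of local population counts (binomial-style events)."""
    n = len(pop)
    new = pop[:]
    for i in range(n):
        # each individual reproduces with probability r * (1 - N/K)
        # (net logistic growth; probability clipped at zero above K)
        p = max(0.0, r * (1.0 - pop[i] / K))
        births = sum(1 for _ in range(pop[i]) if random.random() < p)
        new[i] += births
    # migration: each individual moves to a random neighbour with probability m
    moved = [0] * n
    for i in range(n):
        for _ in range(new[i]):
            if random.random() < m:
                j = i + random.choice((-1, 1))
                moved[min(max(j, 0), n - 1)] += 1   # reflecting boundaries
            else:
                moved[i] += 1
    return moved

random.seed(3)
pop = [10] + [0] * 19                      # founder group at one edge
for _ in range(300):
    pop = generation(pop)
# the population grows toward K per cell and spreads as a travelling front
```

    The binomial character of the birth and movement draws is exactly what produces the fluctuation effects at small local population sizes discussed above.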

  7. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of topography and denudation rates within the deglaciated Washington Cascades to understand the controls on, and time scales of, landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of landslide generation. The model parameters of river incision and stochastic landslides are calibrated against the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters, calculated with a Bayesian inversion scheme, show ranges comparable to those from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of, or longer than, glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  8. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    NASA Astrophysics Data System (ADS)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

    Because statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over the GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors are then assessed considering the grid location for the site in question. To increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series: the precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between the GCM-based historical data and the station precipitation data. Both the GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated a similar trend between the GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling model was applied to several GCMs with the selected predictors, targeting the local precipitation time series as the predictand. The results of this step were used to produce multiple ensembles of downscaled AI-based models.
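
    The predictor-screening step, ranking candidate large-scale predictors by the magnitude of their correlation with the local predictand, can be sketched as follows (all variable names and the synthetic data are hypothetical):

```python
import numpy as np

def select_predictors(X, y, k=2):
    """Rank candidate large-scale predictors by |Pearson r| with the
    station predictand and return the indices of the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return list(np.argsort(-np.abs(r))[:k]), r

rng = np.random.default_rng(4)
n = 300
humidity = rng.normal(size=n)              # hypothetical GCM predictor
geopotential = rng.normal(size=n)          # hypothetical GCM predictor
irrelevant = rng.normal(size=n)            # a noise candidate
precip = 2.0 * humidity - 1.0 * geopotential + 0.3 * rng.normal(size=n)
X = np.column_stack([irrelevant, humidity, geopotential])
top, r = select_predictors(X, precip, k=2)
# the two informative columns outrank the noise column
```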

  9. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  10. Research on light rail electric load forecasting based on ARMA model

    NASA Astrophysics Data System (ADS)

    Huang, Yifan

    2018-04-01

    The article compares a variety of time series models and combines their characteristics with those of power load forecasting. A light rail load forecasting model based on the ARMA model is then established and used to forecast a light rail system. The prediction results show that the accuracy of the model's predictions is high.
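
    Given estimated ARMA coefficients, multi-step forecasts follow the standard recursion in which future innovations are replaced by their zero mean; the coefficients below are illustrative, not those of the light rail model:

```python
def arma_forecast(history, resid, phi, theta, steps=5):
    """Multi-step ARMA(p, q) point forecasts via the standard recursion.
    `phi` are AR coefficients, `theta` are MA coefficients (assumed known)."""
    hist = list(history)
    res = list(resid)
    out = []
    for _ in range(steps):
        ar = sum(p * hist[-i - 1] for i, p in enumerate(phi))
        ma = sum(t * res[-i - 1] for i, t in enumerate(theta))
        f = ar + ma
        out.append(f)
        hist.append(f)        # forecasts feed the AR part forward
        res.append(0.0)       # expected future innovation is zero
    return out

# Illustrative ARMA(1,1): x_t = 0.8 x_{t-1} + e_t + 0.3 e_{t-1}
fc = arma_forecast(history=[10.0], resid=[1.0], phi=[0.8], theta=[0.3], steps=3)
# step 1: 0.8*10 + 0.3*1 = 8.3; then 0.8*8.3 = 6.64; then 0.8*6.64 = 5.312
```

    Note that the MA terms vanish after the first q steps, so long-horizon forecasts decay at the AR rate.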

  11. Finite-element time-domain modeling of electromagnetic data in general dispersive medium using adaptive Padé series

    NASA Astrophysics Data System (ADS)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin; Zhdanov, Michael S.

    2017-12-01

    The induced polarization (IP) method has been widely used in geophysical exploration to identify chargeable targets such as mineral deposits. The inversion of IP data requires modeling the IP response of 3D dispersive conductive structures. We have developed an edge-based finite-element time-domain (FETD) modeling method to simulate the electromagnetic (EM) fields in 3D dispersive media. We solve the vector Helmholtz equation for the total electric field using the edge-based finite-element method with an unstructured tetrahedral mesh. We adopt the backward Euler method, which is unconditionally stable, with semi-adaptive time stepping for the time-domain discretization. We use a direct solver based on sparse LU decomposition to solve the system of equations. We consider the Cole-Cole model in order to take into account the frequency-dependent conductivity dispersion. The Cole-Cole conductivity model in the frequency domain is expanded using a truncated Padé series with adaptive selection of the center frequency of the series for early and late times. This approach can significantly increase the accuracy of FETD modeling.
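    The frequency-dependent conductivity that the truncated Padé series approximates is the Cole-Cole model. One common conductivity form is sigma(w) = sigma_inf * (1 - m / (1 + (i*w*tau)^c)); this sketch evaluates that form directly (the paper's exact parametrization and Padé expansion are not reproduced here, and the parameter values in the comments are illustrative assumptions):

```python
import numpy as np

def cole_cole_sigma(omega, sigma_inf, m, tau, c):
    """Complex Cole-Cole conductivity sigma(w) = sigma_inf*(1 - m/(1 + (i*w*tau)**c)).

    m is the chargeability (0 <= m < 1), tau the time constant, and c the
    frequency exponent (0 < c <= 1).  At w = 0 this reduces to the DC
    conductivity sigma_inf*(1 - m); as w -> inf it approaches sigma_inf."""
    iwt = 1j * np.asarray(omega, dtype=complex) * tau
    return sigma_inf * (1.0 - m / (1.0 + iwt**c))
```

    The time-domain scheme then needs a rational (Padé) approximation of this function so that the convolution with the dispersive conductivity becomes a set of auxiliary ordinary differential equations.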

  12. Advancing Physically-Based Flow Simulations of Alluvial Systems Through Atmospheric Noble Gases and the Novel 37Ar Tracer Method

    NASA Astrophysics Data System (ADS)

    Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip

    2017-12-01

    To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of groundwater residence times in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis to quantify the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully-integrated, physically-based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully-coupled, physically-based flow modeling to better characterize the complex interactions of river-aquifer systems.

  13. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    NASA Astrophysics Data System (ADS)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of ≈54% and a speedup factor of 2.196 times in comparison with the HS-SFM sequential processor. The performance of the P-HS-SFM using the create independent jobs approach is also comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.

  14. An innovative time-cost-quality tradeoff modeling of building construction project based on resource allocation.

    PubMed

    Hu, Wenfa; He, Xinhua

    2014-01-01

    Time, quality, and cost are three important but mutually constraining objectives in a building construction project, and optimizing them simultaneously is a tough challenge for project managers because they are measured in different terms. This paper presents a time-cost-quality optimization model that enables managers to optimize these multiple objectives. The model derives from the project breakdown structure method, in which the resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is generated from the correlations between construction activities. A genetic algorithm is applied in the model to solve the resulting nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off among construction time, cost, and quality, and help make winning decisions in construction practice. The computational time-cost-quality curves from the case study support traditional time-cost assumptions and demonstrate the sophistication of the trade-off model.

  15. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 3. PROGRAM USER'S GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  16. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 4. EVALUATION GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  17. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, a significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. On intermediate time scales, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. A validation suite of model simulations compared to recent high-fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  18. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
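    The non-random cross-validation used to probe transferability can be illustrated with a leave-one-region-out test on a simple linear model. This is a hedged sketch with synthetic data: the paper's predictors are mean winter temperature and cumulative winter precipitation, but the model form, grouping, and data below are assumptions:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def spatial_cv_rmse(X, y, groups):
    """Leave-one-group-out cross-validation (e.g. leave-one-region-out):
    train on all groups but one, test on the held-out group, and average
    the RMSE -- a space-for-time style transferability test."""
    rmses = []
    for g in np.unique(groups):
        train, test = groups != g, groups == g
        beta = fit_linear(X[train], y[train])
        err = y[test] - predict(beta, X[test])
        rmses.append(np.sqrt(np.mean(err ** 2)))
    return np.mean(rmses)
```

    Repeating this for models of increasing complexity exposes the tradeoff the paper describes: complex models can win on random splits yet degrade sharply on held-out regions or years.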

  19. Temporal validation for landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  20. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight to quality between mixed assets occur in extreme cases; however, portfolio risk can be reduced by adapting the mixed-asset portfolio strategy as time and the environment change.
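    As a simplified illustration of copula-based portfolio VaR, the following Monte Carlo sketch uses a static Gaussian copula with normal marginals. The paper uses time-varying Gaussian and SJC copulas fitted to real marginals; every input here (correlation matrix, volatilities, weights) is a synthetic assumption:

```python
import numpy as np

def portfolio_var(corr, vols, weights, alpha=0.99, n=100_000, seed=0):
    """Monte Carlo Value-at-Risk for a portfolio whose dependence is a static
    Gaussian copula with normal marginals -- a simplified stand-in for the
    time-varying copula approach.  corr is the asset correlation matrix,
    vols the per-period marginal volatilities.  Returns the positive VaR
    at confidence level alpha."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr, dtype=float))
    z = rng.standard_normal((n, len(vols))) @ L.T   # correlated N(0,1) draws
    returns = z * np.asarray(vols)                  # apply normal marginals
    port = returns @ np.asarray(weights)
    return -np.quantile(port, 1.0 - alpha)
```

    Replacing the Gaussian draws with draws from an SJC copula, and re-estimating the dependence parameters each period, turns this static sketch into the dynamic VaR-copula measurement described above.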

  1. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    PubMed

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time budget (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving the non-linear models usually employed for biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. The performance of the technique is illustrated with several examples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  2. A Sarsa(λ)-Based Control Model for Real-Time Traffic Light Coordination

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur because traffic demand exceeds the capacity of the road network. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow; in the process, junctions form a policy of coordination, as well as constraints on adjacent junctions, to maximize their own interests. A good traffic signal timing policy is helpful in solving the problem. However, because so many factors can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience has left them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns from a traffic cost per vehicle, which considers delay time, the number of waiting vehicles, and the integrated saturation, and uses this experience to determine the optimal actions. The experimental results show an inspiring improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control. PMID:24592183
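    The core Sarsa(λ) update with eligibility traces can be written compactly. This is a textbook-style tabular sketch, not the paper's traffic-specific implementation; the state and action encodings (and all hyperparameter values) are assumptions:

```python
import numpy as np

def sarsa_lambda_update(Q, E, s, a, r, s2, a2, alpha=0.1, gamma=0.95, lam=0.9):
    """One tabular Sarsa(lambda) step with accumulating eligibility traces.

    Q and E are (n_states, n_actions) arrays; (s, a, r, s2, a2) is one
    state-action-reward-next-state-next-action transition."""
    delta = r + gamma * Q[s2, a2] - Q[s, a]   # on-policy TD error
    E[s, a] += 1.0                            # accumulate trace for (s, a)
    Q += alpha * delta * E                    # credit all recently visited pairs
    E *= gamma * lam                          # decay all traces
    return Q, E
```

    In the traffic setting, the state would encode queue lengths and saturation at a junction, the action a signal phase choice, and the (negative) reward the vehicle cost described above.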

  3. Anticipation by basketball defenders: an explanation based on the three-dimensional inverted pendulum model.

    PubMed

    Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo

    2014-01-01

    We previously estimated the timing when ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attackers' centre of mass (CoM) and the contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing observed defender's cue timing, model-estimated cue timing using the inverted pendulum model (IPM cue timing) and cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for position and velocity of the CoM and the contact foot at the time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and the defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
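    The fall-direction prediction of a linear inverted pendulum admits a closed form: the pendulum diverges toward the side of the contact foot on which the "extrapolated CoM" x + v/ω lies, with ω = sqrt(g/l). The sketch below reduces the paper's 3D model to one lateral axis, and the leg length and gravity values are assumptions:

```python
import math

def fall_direction(x0, v0, foot, leg=1.0, g=9.81):
    """Predict which way a linear inverted pendulum falls, given the lateral
    CoM position x0, CoM velocity v0, and contact-foot position.

    For x'' = (g/l)(x - p), the solution diverges like exp(omega*t) with a
    coefficient proportional to (x0 - p) + v0/omega, so the sign of the
    extrapolated CoM relative to the foot decides the fall direction."""
    omega = math.sqrt(g / leg)
    xcom = x0 + v0 / omega   # extrapolated centre of mass
    if xcom > foot:
        return "right"
    if xcom < foot:
        return "left"
    return "balanced"
```

    This captures why position alone (the CoM-only cue) is a delayed predictor: a CoM directly above the foot but moving quickly already determines the outcome.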

  4. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  5. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested (the Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-data case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used.
The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
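    A minimal sketch of the multiscale-NSE idea: decompose both series with the à trous algorithm and compute a Nash-Sutcliffe efficiency per scale. The B3-spline kernel and the number of levels below are assumptions, not necessarily the paper's exact choices:

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous(x, levels):
    """À trous (stationary) wavelet decomposition: returns the detail series
    w_1..w_levels plus the final smooth approximation."""
    c = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        # dilate the kernel by inserting 2**j - 1 zeros ("holes") between taps
        k = np.zeros((len(B3) - 1) * 2**j + 1)
        k[::2**j] = B3
        smooth = np.convolve(c, k, mode="same")
        details.append(c - smooth)   # detail at scale j+1
        c = smooth
    return details, c

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 = perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multiscale_nse(obs, sim, levels=3):
    """NSE computed scale by scale on the wavelet details plus the smooth
    component, in the spirit of the MNSC measure described above."""
    d_obs, c_obs = atrous(obs, levels)
    d_sim, c_sim = atrous(sim, levels)
    return [nse(o, s) for o, s in zip(d_obs, d_sim)] + [nse(c_obs, c_sim)]
```

    A model with a timing error at fine scales but a good seasonal fit will score poorly on the first entries and well on the last, which is exactly the diagnostic a single global NSE hides.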

  6. Operational Space Weather Models: Trials, Tribulations and Rewards

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Thompson, D. C.; Zhu, L.

    2009-12-01

    There are many empirical, physics-based, and data assimilation models that can probably be used for space weather applications and the models cover the entire domain from the surface of the Sun to the Earth’s surface. At Utah State University we developed two physics-based data assimilation models of the terrestrial ionosphere as part of a program called Global Assimilation of Ionospheric Measurements (GAIM). One of the data assimilation models is now in operational use at the Air Force Weather Agency (AFWA) in Omaha, Nebraska. This model is a Gauss-Markov Kalman Filter (GAIM-GM) model, and it uses a physics-based model of the ionosphere and a Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) measurements. The physics-based model is the Ionosphere Forecast Model (IFM), which is global and covers the E-region, F-region, and topside ionosphere from 90 to 1400 km. It takes account of five ion species (NO+, O2+, N2+, O+, H+), but the main output of the model is a 3-dimensional electron density distribution at user specified times. The second data assimilation model uses a physics-based Ionosphere-Plasmasphere Model (IPM) and an ensemble Kalman filter technique as a basis for assimilating a diverse set of real-time (or near real-time) measurements. This Full Physics model (GAIM-FP) is global, covers the altitude range from 90 to 30,000 km, includes six ions (NO+, O2+, N2+, O+, H+, He+), and calculates the self-consistent ionospheric drivers (electric fields and neutral winds). The GAIM-FP model is scheduled for delivery in 2012. Both of these GAIM models assimilate bottom-side Ne profiles from a variable number of ionosondes, slant TEC from a variable number of ground GPS/TEC stations, in situ Ne from four DMSP satellites, line-of-sight UV emissions measured by satellites, and occultation data. 
Quality control algorithms for all of the data types are provided as an integral part of the GAIM models and these models take account of latent data (up to 3 hours). The trials, tribulations and rewards of constructing and maintaining operational data assimilation models will be discussed.

  7. Dynamic ambulance reallocation for the reduction of ambulance response times using system status management.

    PubMed

    Lam, Sean Shao Wei; Zhang, Ji; Zhang, Zhong Cheng; Oh, Hong Choon; Overton, Jerry; Ng, Yih Yng; Ong, Marcus Eng Hock

    2015-02-01

    Dynamically reassigning ambulance deployment locations throughout a day to balance ambulance availability and demands can be effective in reducing response times. The objectives of this study were to model dynamic ambulance allocation plans in Singapore based on the system status management (SSM) strategy and to evaluate the dynamic deployment plans using a discrete event simulation (DES) model. The geographical information system-based analysis and mathematical programming were used to develop the dynamic ambulance deployment plans for SSM based on ambulance calls data from January 1, 2011, to June 30, 2011. A DES model that incorporated these plans was used to compare the performance of the dynamic SSM strategy against static reallocation policies under various demands and travel time uncertainties. When the deployment plans based on the SSM strategy were followed strictly, the DES model showed that the geographical information system-based plans resulted in approximately 13-second reduction in the median response times compared to the static reallocation policy, whereas the mathematical programming-based plans resulted in approximately a 44-second reduction. The response times and coverage performances were still better than the static policy when reallocations happened for only 60% of all the recommended moves. Dynamically reassigning ambulance deployment locations based on the SSM strategy can result in superior response times and coverage performance compared to static reallocation policies even when the dynamic plans were not followed strictly. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real-valued quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deeper quantum entanglement. A novel hybrid network model, the Complex Rotation Quantum Dynamic Neural Network (CRQDNN), is then proposed based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time series prediction. Two application studies are presented in this paper: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  10. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Treesearch

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  11. The Effect of Inquiry Training Learning Model Based on Just in Time Teaching for Problem Solving Skill

    ERIC Educational Resources Information Center

    Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda

    2016-01-01

    One of the factors that can support successful learning activity is the use of learning models suited to the objectives to be achieved. This study aimed to analyze the differences in the problem-solving ability of physics students taught with the Inquiry Training learning model based on Just In Time Teaching [JITT] and those taught with conventional learning using a cooperative model…

  12. A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model

    PubMed Central

    Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin

    2009-01-01

    A decentralized model predictive controller applicable to systems which exhibit different dynamic characteristics in different channels is presented in this paper. These systems can be regarded as combinations of a fast model and a slow model whose response speeds are on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix from plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542

  13. Modelling endurance and resumption times for repetitive one-hand pushing.

    PubMed

    Rose, Linda M; Beauchemin, Catherine A A; Neumann, W Patrick

    2018-07-01

    This study's objective was to develop models of endurance time (ET) as a function of load level (LL), and of resumption time (RT) after loading as a function of both LL and loading time (LT), for repeated loadings. Ten male participants with experience in construction work each performed 15 different one-handed repeated pushing tasks at shoulder height with varied exerted force and duration. These data were used to create regression models predicting ET and RT. It is concluded that power-law relationships are the most appropriate when modelling ET and RT. While the data the equations are based on are limited with regard to the number of participants, gender, postures, and the magnitude and type of exerted force, the paper suggests how this kind of modelling can be used in job design and in further research. Practitioner Summary: Adequate muscular recovery during work shifts is important for creating sustainable jobs. This paper describes mathematical modelling and presents models for endurance times and resumption times (an aspect of recovery need) based on data from an empirical study. The models can be used to help manage fatigue levels in job design.
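    A power-law fit such as ET = a * LL^b reduces to linear regression in log-log space. The sketch below illustrates this (the coefficients and data points in the test are synthetic, not the study's measurements):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear regression on log(y) = log(a) + b*log(x),
    as is common for endurance-time vs load-level relationships."""
    lx, ly = np.log(x), np.log(y)
    b, loga = np.polyfit(lx, ly, 1)   # slope b, intercept log(a)
    return np.exp(loga), b
```

    The RT model would be fitted the same way with two predictors, log(LL) and log(LT), in a multiple regression.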

  14. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is determining sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series combining the wavelet transform and a nonlinear model, drawing on the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner intended to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results indicate that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.

  15. Retrospective estimation of breeding phenology of American Goldfinch (Carduelis tristis) using pattern oriented modeling

    EPA Science Inventory

    Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...

  16. USE OF TRANS-CONTEXTUAL MODEL-BASED PHYSICAL ACTIVITY COURSE IN DEVELOPING LEISURE-TIME PHYSICAL ACTIVITY BEHAVIOR OF UNIVERSITY STUDENTS.

    PubMed

    Müftüler, Mine; İnce, Mustafa Levent

    2015-08-01

    This study examined how a physical activity course based on the Trans-Contextual Model affected the variables of perceived autonomy support, autonomous motivation, determinants of leisure-time physical activity behavior, basic psychological needs satisfaction, and leisure-time physical activity behaviors. The participants were 70 Turkish university students (M age=23.3 yr., SD=3.2). A pre-test-post-test control group design was constructed. Initially, the participants were randomly assigned into an experimental (n=35) and a control (n=35) group. The experimental group followed a 12 wk. trans-contextual model-based intervention. The participants were pre- and post-tested in terms of Trans-Contextual Model constructs and of self-reported leisure-time physical activity behaviors. Multivariate analyses showed significant increases over the 12 wk. period for perceived autonomy support from instructor and peers, autonomous motivation in leisure-time physical activity setting, positive intention and perceived behavioral control over leisure-time physical activity behavior, more fulfillment of psychological needs, and more engagement in leisure-time physical activity behavior in the experimental group. These results indicated that the intervention was effective in developing leisure-time physical activity and indicated that the Trans-Contextual Model is a useful way to conceptualize these relationships.

  17. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Abstract Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237

  18. An architecture for the development of real-time fault diagnosis systems using model-based reasoning

    NASA Technical Reports Server (NTRS)

    Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday

    1992-01-01

    Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.

  19. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable estimation of copy number with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR-based transgene copy number determination. Three experimental designs and four data-quality-control-integrated statistical models are presented. In the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to integrating amplification efficiency. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination.
    These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
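    At the core of the second design is a relative-quantification (ΔΔCt-style) comparison of the transgene against an internal reference gene. The sketch below is a simplified illustration assuming a common, known amplification efficiency; the function names and Ct values are hypothetical, and the paper's full models additionally include regression-based quality control.

```python
def copy_number_ratio(ct_target, ct_reference, efficiency=2.0):
    """Relative quantity of target vs. reference gene, assuming equal known efficiency."""
    return efficiency ** (ct_reference - ct_target)

def estimate_copies(ct_t_sample, ct_r_sample, ct_t_calib, ct_r_calib, calib_copies=1):
    """Estimate copy number against a calibrator event of known copy number."""
    return calib_copies * (copy_number_ratio(ct_t_sample, ct_r_sample) /
                           copy_number_ratio(ct_t_calib, ct_r_calib))

# A sample whose transgene amplifies one cycle earlier than a single-copy
# calibrator (with identical reference-gene Ct) is estimated at two copies.
copies = estimate_copies(24.0, 25.0, 25.0, 25.0, calib_copies=1)
```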

  20. A statistical learning approach to the modeling of chromatographic retention of oligonucleotides incorporating sequence and secondary structure data

    PubMed Central

    Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver

    2007-01-01

    We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619

  1. REAL-TIME MODEL-BASED ELECTRICAL POWERED WHEELCHAIR CONTROL

    PubMed Central

    Wang, Hongwu; Salatin, Benjamin; Grindle, Garrett G.; Ding, Dan; Cooper, Rory A.

    2009-01-01

    The purpose of this study was to evaluate the effects of three different control methods on driving speed variation and wheel slip of an electric-powered wheelchair (EPW). A kinematic model as well as a 3-D dynamic model was developed to control the velocity and traction of the wheelchair. A smart wheelchair platform was designed and built with a computerized controller and encoders to record wheel speeds and to detect slip. A model-based controller, a proportional-integral-derivative (PID) controller and an open-loop controller were applied with the EPW driving on four different surfaces at three specified speeds. The speed errors, variation, rise time, settling time and slip coefficient were calculated and compared for a speed step-response input. Experimental results showed that the model-based control performed best on all surfaces across the speeds. PMID:19733494
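    Of the three controllers compared, the PID is the easiest to sketch. The following is a generic discrete PID driving a toy first-order wheel-speed plant; the gains and plant model are illustrative assumptions, not the EPW platform's tuning.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not the paper's tuning)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order wheel-speed plant (speed' = u - speed) to 1 m/s.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
speed = 0.0
for _ in range(2000):
    u = pid.update(1.0, speed)
    speed += (u - speed) * 0.01   # forward-Euler step of the plant
```

    The integral term removes the steady-state error that a pure proportional controller would leave against this plant.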

  2. Pattern formation in individual-based systems with time-varying parameters

    NASA Astrophysics Data System (ADS)

    Ashcroft, Peter; Galla, Tobias

    2013-12-01

    We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.

  3. A whole-body physiologically based pharmacokinetic (WB-PBPK) model of ciprofloxacin: a step towards predicting bacterial killing at sites of infection.

    PubMed

    Sadiq, Muhammad W; Nielsen, Elisabet I; Khachman, Dalia; Conil, Jean-Marie; Georges, Bernard; Houin, Georges; Laffont, Celine M; Karlsson, Mats O; Friberg, Lena E

    2017-04-01

    The purpose of this study was to develop a whole-body physiologically based pharmacokinetic (WB-PBPK) model of ciprofloxacin for ICU patients, based on plasma concentration data alone. As a next step, tissue and organ concentration-time profiles in patients were predicted using the developed model. The WB-PBPK model was built using a non-linear mixed effects approach based on data from 102 adult intensive care unit patients. Tissue-to-plasma distribution coefficients (Kp) were available from the literature and used as informative priors. The developed WB-PBPK model successfully characterized both the typical trends and the variability of the available ciprofloxacin plasma concentration data. The WB-PBPK model was thereafter combined with a pharmacokinetic-pharmacodynamic (PKPD) model, developed from in vitro time-kill data of ciprofloxacin and Escherichia coli, to illustrate the potential of this type of approach to predict the time course of bacterial killing at different sites of infection. The predicted unbound concentration-time profile in extracellular tissue drove the bacterial killing in the PKPD model, and the rate and extent of takeover by mutant bacteria in different tissues were explored. Bacterial killing was predicted to be most efficient in lung and kidney, which corresponds well to ciprofloxacin's indications of pneumonia and urinary tract infections. Furthermore, a function based on available information on bacterial killing by the immune system in vivo was incorporated. This work demonstrates the development and application of a WB-PBPK-PD model to compare the killing of bacteria with different antibiotic susceptibilities, of value for drug development and the optimal use of antibiotics.

  4. Deformation analysis of polymers composites: rheological model involving time-based fractional derivative

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Yi, H. Y.; Mishnaevsky, L.; Wang, R.; Duan, Z. Q.; Chen, Q.

    2017-05-01

    A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine was employed to investigate the time-dependent deformation of four dog-bone-shaped GFRP specimens at various stress levels. A negative exponent function based on structural changes is introduced to describe the evolution of material-property damage during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is suggested to characterize the time-dependent behavior of GFRP composites by replacing the Newtonian dashpot in the classical Maxwell model with an Abel dashpot. The analytic solution for the fractional derivative Maxwell model is given and the relevant parameters are determined. The results estimated by the proposed fractional derivative Maxwell model are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
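    For a fractional Maxwell model (a spring in series with an Abel dashpot), the creep strain under constant stress has the standard form ε(t) = σ₀(1/E + t^α/(η·Γ(1+α))), which reduces to the classical Maxwell model at α = 1. A minimal numerical sketch follows; the symbols and parameter values are illustrative, not fitted to the GFRP data.

```python
import math

def fractional_maxwell_creep(t, sigma0, E, eta, alpha):
    """Creep strain of a spring (modulus E) in series with an Abel dashpot:
    eps(t) = sigma0 * (1/E + t**alpha / (eta * Gamma(1 + alpha)))."""
    return sigma0 * (1.0 / E + t ** alpha / (eta * math.gamma(1.0 + alpha)))

# At alpha = 1 the Abel dashpot reduces to a Newtonian dashpot (classical Maxwell):
classical = fractional_maxwell_creep(10.0, 1.0, 2.0, 5.0, 1.0)   # 1/2 + 10/5 = 2.5
```

    Varying α between 0 and 1 interpolates between solid-like and fluid-like creep with a single extra parameter, which is the model's appeal.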

  5. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
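    As a concrete example of an NHPP-based SRM, the classical Goel-Okumoto model has mean value function m(t) = a(1 − e^(−bt)), where a is the expected total number of faults and b the per-fault detection rate. The sketch below is a generic illustration of this baseline, not the equilibrium-distribution variant proposed in the paper; the parameter values are made up.

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of faults detected by time t: m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a, b):
    """Expected number of faults still undetected at time t."""
    return a - goel_okumoto_mean(t, a, b)

# Illustrative parameters: 100 total expected faults, detection rate 0.1 per unit time.
detected_at_10 = goel_okumoto_mean(10.0, 100.0, 0.1)
```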

  6. An Innovative Time-Cost-Quality Tradeoff Modeling of Building Construction Project Based on Resource Allocation

    PubMed Central

    2014-01-01

    Time, quality, and cost are three important but mutually constraining objectives in a building construction project, and optimizing them is a tough challenge for project managers since they are measured in different units. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is derived from the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is then generated based on the correlations between construction activities. A genetic algorithm is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off between construction time, cost, and quality, and help make winning decisions in construction practice. The computational time-cost-quality curves, presented as visual graphics for the case study, confirm that traditional cost-time assumptions are reasonable and demonstrate the sophistication of this time-cost-quality trade-off model. PMID:24672351

  7. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation

    PubMed Central

    Kong, Zehui; Liu, Teng

    2017-01-01

    To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to utilize the statistical characteristics of online driving schedule effectively, a recursive algorithm for the transition probability matrix (TPM) of power-request is derived. The reinforcement learning (RL) is applied to calculate and update the control policy at regular time, adapting to the varying driving conditions. A facing-forward powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamical model. The robustness and adaptability of real-time energy management strategy are validated through the comparison with the stationary control strategy based on initial transition probability matrix (TPM) generated from a long naturalistic driving cycle in the simulation. Results indicate that proposed method has better fuel economy than stationary one and is more effective in real-time control. PMID:28671967

  8. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation.

    PubMed

    Kong, Zehui; Zou, Yuan; Liu, Teng

    2017-01-01

    To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to utilize the statistical characteristics of online driving schedule effectively, a recursive algorithm for the transition probability matrix (TPM) of power-request is derived. The reinforcement learning (RL) is applied to calculate and update the control policy at regular time, adapting to the varying driving conditions. A facing-forward powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamical model. The robustness and adaptability of real-time energy management strategy are validated through the comparison with the stationary control strategy based on initial transition probability matrix (TPM) generated from a long naturalistic driving cycle in the simulation. Results indicate that proposed method has better fuel economy than stationary one and is more effective in real-time control.

  9. Estimation of the potential leakage of the chemical munitions based on two hydrodynamical models implemented for the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jakacki, Jaromir; Golenko, Mariya

    2014-05-01

    Two hydrodynamic models (the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP)) have been implemented for the area of the Baltic Sea containing the sites where chemical munitions were dumped during World War II. The models were configured from similar data sources: bathymetry, initial conditions and external forcing were based on identical data, and the horizontal resolutions of the two models are also very similar. Several simulations with different initial conditions have been performed. The bottom currents from both models were compared and analyzed, and on this basis the dangerous area and critical time were estimated. Lagrangian particle tracking and a passive tracer were also implemented; based on these results, the probability of dangerous doses appearing, and its time evolution, are presented. This work was performed within the framework of the MODUM project, financially supported by NATO.

  10. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, allowing time synchronization to be achieved. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm achieves higher time synchronization precision than traditional time synchronization algorithms.
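    The linear substructure of such a clock model (state = [offset, skew], with the offset observed in noise) is the part a Kalman filter handles in the RB decomposition. The following is a simplified sketch with scalar observations and a crude diagonal process-noise term; it is illustrative only and omits the particle-filter and DPM components entirely.

```python
def clock_kalman_step(x, P, y, dt, q, r):
    """One predict/update step for the linear clock model
    offset' = offset + skew*dt, with noisy observation y of the offset.
    x = [offset, skew]; P is the 2x2 covariance; q is a simple process-noise
    term added to both diagonal entries; r is the observation-noise variance."""
    # Predict with transition matrix F = [[1, dt], [0, 1]].
    x0 = x[0] + dt * x[1]
    x1 = x[1]
    P00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    P01 = P[0][1] + dt * P[1][1]
    P10 = P[1][0] + dt * P[1][1]
    P11 = P[1][1] + q
    # Update with observation matrix H = [1, 0] (scalar innovation).
    S = P00 + r
    K0, K1 = P00 / S, P10 / S
    innov = y - x0
    x_new = [x0 + K0 * innov, x1 + K1 * innov]
    P_new = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
    return x_new, P_new

# With a near-noiseless observation the updated offset snaps to the measurement.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
x, P = clock_kalman_step(x, P, y=5.0, dt=1.0, q=0.01, r=1e-9)
```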

  11. Modeling of the attenuation of stress waves in concrete based on the Rayleigh damping model using time-reversal and PZT transducers

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Huo, Linsheng; Gao, Weihang; Li, Hongnan; Song, Gangbing

    2017-10-01

    Wave-based structural health monitoring of concrete has attracted much attention. A stress wave experiences significant attenuation in concrete; however, there is no unified method for predicting the attenuation coefficient of the stress wave. In this paper, a simple and effective absorption attenuation model for stress waves in concrete is developed based on the Rayleigh damping model, which indicates that the absorption attenuation coefficient of stress waves in concrete is directly proportional to the square of the stress wave frequency when the damping ratio is small. To verify the theoretical model, experiments were carried out in which a concrete beam was instrumented with embedded d33-mode piezoelectric smart aggregates to detect the propagation of stress waves. Direct stress waves are difficult to distinguish because of the complex propagation paths and the reflection and scattering of stress waves in concrete. Hence, as another innovation of this paper, a new method for computing the absorption attenuation coefficient based on the time-reversal method is developed. Owing to the self-adaptive focusing properties of the time-reversal method, the time-reversed stress wave focuses and generates a peak value, which eliminates the adverse effects of multipath propagation, reflection, and scattering. The absorption attenuation coefficient is computed by analyzing changes in the peak value of the time-reversal focused signal. The experimental results are found to be in good agreement with the theoretical model.
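    The key prediction, that the absorption attenuation coefficient grows with the square of frequency at small damping ratios, can be written as α = βf² with amplitude decay A(x) = A₀·e^(−αx). A minimal sketch follows; β, the frequencies and the distance are made-up values.

```python
import math

def attenuated_amplitude(a0, beta, freq, distance):
    """Amplitude after `distance` of travel when alpha = beta * freq**2
    (Rayleigh damping, small damping ratio): A = a0 * exp(-alpha * distance)."""
    alpha = beta * freq ** 2
    return a0 * math.exp(-alpha * distance)

# Doubling the frequency quadruples alpha over the same path.
a_low = attenuated_amplitude(1.0, 0.01, 1.0, 3.0)
a_high = attenuated_amplitude(1.0, 0.01, 2.0, 3.0)
```

    Since quadrupling α multiplies the exponent by four, the higher-frequency amplitude over a fixed path equals the lower-frequency amplitude raised to the fourth power.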

  12. Lapse of time effects on tax evasion in an agent-based econophysics model

    NASA Astrophysics Data System (ADS)

    Seibold, Götz; Pickhardt, Michael

    2013-05-01

    We investigate an inhomogeneous Ising model in the context of tax evasion dynamics where different types of agents are parameterized via local temperatures and magnetic fields. In particular, we analyze the impact of lapse of time effects (i.e. backauditing) and endogenously determined penalty rates on tax compliance. Both features contribute to a microfoundation of agent-based econophysics models of tax evasion.

  13. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-12-18

    This paper presents four algorithms for generating random forecast error time series: a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. The paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
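    A minimal example of such a generator is a first-order autoregressive (AR(1)) error series with its innovation variance scaled to hit a target stationary standard deviation. This is a generic sketch, far simpler than the four algorithms compared in the paper; the values of phi and sigma are illustrative.

```python
import random
import statistics

def ar1_error_series(n, phi, sigma, seed=42):
    """Forecast-error series e_k = phi*e_{k-1} + w_k, with the innovation
    standard deviation chosen so the stationary std of e equals sigma."""
    rng = random.Random(seed)
    w_std = sigma * (1.0 - phi ** 2) ** 0.5
    e, out = 0.0, []
    for _ in range(n):
        e = phi * e + rng.gauss(0.0, w_std)
        out.append(e)
    return out

# Target characteristics: std 2.0 and lag-1 autocorrelation 0.8 (illustrative).
errs = ar1_error_series(5000, phi=0.8, sigma=2.0)
```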

  14. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
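    Granger's principle of predictability can be illustrated in its simplest bivariate AR(1) form: compare the residual variance of a model for x that uses only x's own past with one that also uses y's past. The sketch below is a generic illustration of this prediction-error comparison, not the authors' frequency-selective procedure (which additionally cancels specific signal components before refitting); the simulated coupling is made up.

```python
import math
import random

def ols2(X, y):
    """Solve a two-parameter least-squares problem via the normal equations."""
    s00 = sum(a * a for a, _ in X); s01 = sum(a * b for a, b in X)
    s11 = sum(b * b for _, b in X)
    t0 = sum(a * v for (a, _), v in zip(X, y))
    t1 = sum(b * v for (_, b), v in zip(X, y))
    det = s00 * s11 - s01 * s01
    return (s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det

def granger_index(x, y):
    """Granger causality index y -> x with AR(1) models:
    log(restricted residual variance / full residual variance)."""
    # Restricted model: x_t = a * x_{t-1}
    a_r = sum(x[t] * x[t - 1] for t in range(1, len(x))) / \
          sum(x[t - 1] ** 2 for t in range(1, len(x)))
    var_r = sum((x[t] - a_r * x[t - 1]) ** 2 for t in range(1, len(x))) / (len(x) - 1)
    # Full model: x_t = a * x_{t-1} + b * y_{t-1}
    X = [(x[t - 1], y[t - 1]) for t in range(1, len(x))]
    targ = [x[t] for t in range(1, len(x))]
    a_f, b_f = ols2(X, targ)
    var_f = sum((v - a_f * p - b_f * q) ** 2 for (p, q), v in zip(X, targ)) / (len(x) - 1)
    return math.log(var_r / var_f)

# Simulate y driving x with a one-sample lag.
rng = random.Random(7)
y = [rng.gauss(0.0, 1.0) for _ in range(3000)]
x = [0.0]
for t in range(1, 3000):
    x.append(0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.gauss(0.0, 1.0))
gxy = granger_index(x, y)   # should be clearly positive (y predicts x)
gyx = granger_index(y, x)   # should be near zero
```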

  15. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signal components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation with a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former removes high-order frequency modulation (FM) so that the latter can infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation with singular-value decomposition method, the parametric time-frequency analysis with filter method, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.

  16. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  17. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  18. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than those developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should always be considered in order to obtain an optimal navigation solution for MEMS-based INS/GPS integration.

  19. The IRGen infrared data base modeler

    NASA Technical Reports Server (NTRS)

    Bernstein, Uri

    1993-01-01

    IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface with a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.

  20. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  1. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  2. Student Modeling Based on Problem Solving Times

    ERIC Educational Resources Information Center

    Pelánek, Radek; Jarušek, Petr

    2015-01-01

    Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…

  3. Automated real time constant-specificity surveillance for disease outbreaks.

    PubMed

    Wieland, Shannon C; Brownstein, John S; Berger, Bonnie; Mandl, Kenneth D

    2007-06-13

    For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study (p < 0.05). We develop an outbreak detection method, called the expectation-variance model, based on generalized additive modeling to achieve a constant specificity by accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
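    The central idea, alarming on a standardized deviation that uses the modelled variance of the visit count rather than only its expectation, can be sketched as follows. This is a minimal illustration and does not reproduce the paper's generalized additive model; the counts and threshold are made up:

    ```python
    import math

    def ev_alarm(observed, expected, variance, z_threshold=2.33):
        """Expectation-variance style check: raise an alarm when the
        observed count exceeds the modelled expectation by more than
        z_threshold standard deviations. Because the threshold is set
        in standardized units, the false-alarm rate (and hence the
        specificity) stays constant even when the variance of visit
        counts changes with day of week, month, or year."""
        z = (observed - expected) / math.sqrt(variance)
        return z > z_threshold, z

    # 55 visits observed where the model expects 40 with variance 36
    alarm, z = ev_alarm(observed=55, expected=40, variance=36)
    ```

    A model that tracked only the expectation would implicitly assume a fixed variance, which is exactly what makes the specificity of the traditional methods drift with the day of the week.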

  4. The effectiveness of snow cube throwing learning model based on exploration

    NASA Astrophysics Data System (ADS)

    Sari, Nenden Mutiara

    2017-08-01

    This study aimed to determine the effectiveness of the Snow Cube Throwing (SCT) and cooperative models in exploration-based math learning in terms of the time required to complete the teaching materials and student engagement. This quasi-experimental study was conducted at SMPN 5 Cimahi, Indonesia. All 382 grade VIII students at SMPN 5 Cimahi constituted the population. The sample consisted of two classes chosen by purposive sampling: the first experiment class consisted of 38 students and the second experiment class consisted of 38 students. An observation sheet was used to record the time required to complete the teaching materials and the number of students involved in each meeting. The data obtained were analyzed by independent-sample t tests and charts. The results of this study: the SCT learning model based on exploration is more effective than the cooperative learning model based on exploration in terms of the time required to complete exploration-based teaching materials and student engagement.

  5. The dynamic financial distress prediction method of EBW-VSTW-SVM

    NASA Astrophysics Data System (ADS)

    Sun, Jie; Li, Hui; Chang, Pei-Chann; He, Kai-Yu

    2016-07-01

    Financial distress prediction (FDP) plays an important role in corporate financial risk management. Most former research in this field tried to construct effective static FDP (SFDP) models that are difficult to embed into enterprise information systems, because they are based on horizontal data-sets collected outside the modelling enterprise and define financial distress as absolute conditions such as bankruptcy or insolvency. This paper proposes an approach for dynamic evaluation and prediction of financial distress based on entropy-based weighting (EBW), the support vector machine (SVM) and an enterprise's vertical sliding time window (VSTW). The dynamic FDP (DFDP) method, named EBW-VSTW-SVM, keeps updating the FDP model dynamically as time goes on and needs only the historical financial data of the modelling enterprise itself, and is thus easier to embed into enterprise information systems. The EBW-VSTW-SVM method consists of four steps: evaluation of vertical relative financial distress (VRFD) based on EBW, construction of the training data-set for DFDP modelling according to the VSTW, training of the DFDP model based on SVM, and DFDP for the future time point. We carry out case studies for two listed pharmaceutical companies and experimental analysis for some other companies to simulate the sliding of the enterprise vertical time window. The results indicate that the proposed approach is feasible and efficient in helping managers improve corporate financial management.

  6. Testing the Digital Thread in Support of Model-Based Manufacturing and Inspection

    PubMed Central

    Hedberg, Thomas; Lubell, Joshua; Fischer, Lyle; Maggiano, Larry; Feeney, Allison Barnard

    2016-01-01

    A number of manufacturing companies have reported anecdotal evidence describing the benefits of Model-Based Enterprise (MBE). Based on this evidence, major players in industry have embraced a vision to deploy MBE. In our view, the best chance of realizing this vision is the creation of a single “digital thread.” Under MBE, there exists a Model-Based Definition (MBD), created by the Engineering function, that downstream functions reuse to complete Model-Based Manufacturing and Model-Based Inspection activities. The ensemble of data that enables the combination of model-based definition, manufacturing, and inspection defines this digital thread. Such a digital thread would enable real-time design and analysis, collaborative process-flow development, automated artifact creation, and full-process traceability in a seamless real-time collaborative development among project participants. This paper documents the strengths and weaknesses in the current, industry strategies for implementing MBE. It also identifies gaps in the transition and/or exchange of data between various manufacturing processes. Lastly, this paper presents measured results from a study of model-based processes compared to drawing-based processes and provides evidence to support the anecdotal evidence and vision made by industry. PMID:27325911

  7. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not identical. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    PubMed

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
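    The two-parameter computation described above reduces to multiplying the minutes each activity takes by the per-minute labor cost and summing. A minimal sketch with hypothetical numbers (the activity times and rates below are not the laboratory's actual figures; only the weekly labor minutes come from the study):

    ```python
    def service_cost(activities):
        """Time-driven ABC: the cost of a service is the sum, over its
        activities, of (minutes required) x (labor cost per minute)."""
        return sum(minutes * rate_per_min for minutes, rate_per_min in activities)

    def labor_utilization(supplied_min, practical_capacity_min):
        """A ratio above 1.0 means the team is overloaded."""
        return supplied_min / practical_capacity_min

    # Hypothetical embryo-transfer service: 30 min of prep at $1.20/min
    # plus 45 min of surgical time at $1.50/min
    cost = service_cost([(30, 1.20), (45, 1.50)])    # 36.0 + 67.5 = 103.5
    utilization = labor_utilization(10_645, 8_400)   # weekly minutes from the study
    ```

    The utilization check mirrors the paper's finding that supplied labor (10,645 min/wk) exceeded practical capacity (8,400 min/wk), indicating the need for additional labor.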

  9. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  10. State-of-the-Art Review on Physiologically Based Pharmacokinetic Modeling in Pediatric Drug Development.

    PubMed

    Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T

    2018-05-18

    Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.

  11. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  12. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  13. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  14. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
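    A standard data-driven route to Koopman spectral properties is Dynamic Mode Decomposition (DMD). The abstract does not name its exact identification technique, so the following is an assumed, minimal DMD sketch that fits a linear operator to successive snapshots and reads off its spectrum:

    ```python
    import numpy as np

    def dmd(X):
        """Fit the best linear map A with X[:, k+1] ~ A @ X[:, k]
        (a finite-dimensional surrogate for the Koopman operator)
        and return A together with its eigenvalues and eigenvectors;
        the eigenvalues encode frequencies and growth/decay rates."""
        X1, X2 = X[:, :-1], X[:, 1:]
        A = X2 @ np.linalg.pinv(X1)
        eigvals, eigvecs = np.linalg.eig(A)
        return A, eigvals, eigvecs

    # A pure rotation: its DMD eigenvalues lie on the unit circle,
    # i.e. an oscillation with neither growth nor decay
    theta = 0.1
    t = np.arange(100)
    X = np.vstack([np.cos(theta * t), np.sin(theta * t)])
    A, lam, _ = dmd(X)
    ```

    Distances between such identified spectra (and modes) are what make model comparison and clustering possible without access to the underlying nonlinear generative model.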

  15. A perturbative approach for enhancing the performance of time series forecasting.

    PubMed

    de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C

    2017-04-01

    This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecasting for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecasting. If that residual series is not white noise, then it can be used to improve the accuracy of the initial model and a new predictive model is adjusted using residual series. The whole process is repeated until convergence or the residual series becomes white noise. The output of the method is then given by summing up the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real world time series. A comparison was made with six other methods experimented and ten other results found in the literature. Results show that not only the performance of the initial model is significantly improved but also the proposed method outperforms the other results previously published. Copyright © 2017 Elsevier Ltd. All rights reserved.
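    The iterate-on-residuals scheme can be sketched generically. The stage models below (a constant mean fit corrected by least-squares trend fits) are hypothetical stand-ins, not the predictive models used in the paper:

    ```python
    import numpy as np

    def perturbative_fit(y, base_fit, correction_fit, n_corrections=3):
        """Perturbative scheme in miniature: fit an initial model, then
        repeatedly fit a new model to the residual (actual minus current
        fit) and add its output to the running total, so each stage
        corrects what the previous stages missed."""
        total = base_fit(y)
        for _ in range(n_corrections):
            residual = y - total
            total = total + correction_fit(residual)
        return total

    # Toy illustration: a linear series, initially fit by its mean only
    t = np.arange(50, dtype=float)
    y = 2.0 * t + 5.0
    mean_model = lambda s: np.full_like(s, s.mean())
    def trend_model(s):
        return np.polyval(np.polyfit(t, s, 1), t)

    approx = perturbative_fit(y, mean_model, trend_model)
    ```

    Here the first correction already reduces the residual to numerical zero, which mirrors the paper's stopping criterion of iterating until the residual series becomes white noise.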

  16. Sonographically guided intrasheath percutaneous release of the first annular pulley for trigger digits, part 2: randomized comparative study of the economic impact of 3 surgical models.

    PubMed

    Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero

    2012-03-01

    Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 into the surgical models: sonographically guided-office-based, classic open-day surgery, and wide-awake-office-based. Regression analysis for the turnover time was monitored for assessing stability (R² < .26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income to cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided-office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly lower mean turnover times, income to cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point occurred after 19.78 sonographically guided-office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last-25% sonographically guided true costs if the last-25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. This trial comparing surgical models was underpowered and is inconclusive on turnover times; however, the sonographically guided-office-based approach showed shorter turnover times and better economic results with a quick recoup of the costs of sonographically assisted surgery.

  17. Crowd evacuation model based on bacterial foraging algorithm

    NASA Astrophysics Data System (ADS)

    Shibiao, Mu; Zhijun, Chen

    To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, given walking and queue times, a target optimization function is built. At the same time, a BFA is used to optimize the objective function. Finally, through real and simulation experiments, the relationship between the parameters of evacuation time, exit width, pedestrian density, and average evacuation speed is analyzed. The results show that the model can effectively describe a real evacuation.

  18. Space-time latent component modeling of geo-referenced health data.

    PubMed

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  19. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model has changed from one to another, all functions of a search technique must be reimplemented because the types of models are different even if the same search technique has been applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce redundant works. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves the productivity by about 50% when changing the type of a model. PMID:25302314

  20. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
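    The univariate building block the paper extends can be sketched directly: rescale inter-spike intervals through the integrated model intensity and compare the result to Uniform(0,1) with a KS distance. The multivariate extension itself is not reproduced here; the homogeneous Poisson example and the 10 Hz rate are illustrative assumptions:

    ```python
    import numpy as np

    def rescale_intervals(spike_times, intensity, dt=1e-3):
        """Univariate time-rescaling: integrate the model's conditional
        intensity over each inter-spike interval. If the model is correct,
        the integrals z are iid Exp(1), so u = 1 - exp(-z) is Uniform(0,1)."""
        z = np.array([intensity(np.arange(a, b, dt)).sum() * dt
                      for a, b in zip(spike_times[:-1], spike_times[1:])])
        return 1.0 - np.exp(-z)

    def ks_distance_uniform(u):
        """Kolmogorov-Smirnov distance between the empirical CDF of u
        and the Uniform(0,1) CDF."""
        u = np.sort(u)
        n = len(u)
        hi = np.arange(1, n + 1) / n
        lo = np.arange(0, n) / n
        return float(max(np.max(hi - u), np.max(u - lo)))

    # Homogeneous Poisson spikes at 10 Hz, tested against the true rate:
    # the KS distance stays small, so the correct model "passes"
    rng = np.random.default_rng(1)
    spikes = np.cumsum(rng.exponential(1 / 10.0, size=2000))
    u = rescale_intervals(spikes, lambda s: np.full_like(s, 10.0))
    D = ks_distance_uniform(u)
    ```

    The paper's point is that running exactly this univariate test per neuron can pass a coupling-free model even when spike trains are strongly correlated, which only the multivariate version detects.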

  1. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.

  2. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.

  3. Development of a dynamic framework to explain population patterns of leisure-time physical activity through agent-based modeling.

    PubMed

    Garcia, Leandro M T; Diez Roux, Ana V; Martins, André C R; Yang, Yong; Florindo, Alex A

    2017-08-22

    Despite the increasing body of evidence on the factors influencing leisure-time physical activity (LTPA), our understanding of the mechanisms and interactions that lead to the formation and evolution of population patterns is still limited. Moreover, most frameworks in this field fail to capture dynamic processes. Our aim was to create a dynamic conceptual model depicting the interaction between key psychological attributes of individuals and main aspects of the built and social environments in which they live. This conceptual model will inform and support the development of an agent-based model aimed at exploring how population patterns of LTPA in adults may emerge from the dynamic interplay between psychological traits and built and social environments. We integrated existing theories and models as well as available empirical data (both from literature reviews) and expert opinions (based on a systematic expert assessment of an intermediary version of the model). The model explicitly presents intention as the proximal determinant of LTPA, a relationship dynamically moderated by the built environment (access, quality, and available activities) - with the strength of the moderation varying as a function of the person's intention - and influenced both by the social environment (proximal network's and community's behavior) and the person's behavior. Our conceptual model is well supported by evidence and experts' opinions and will inform the design of our agent-based model, as well as data collection and analysis in future investigations of population patterns of LTPA among adults.

  4. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: the mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by a factor of more than 80, leading to near real-time assessment of FFR: average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with a 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
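The offline-training / online-prediction workflow described above can be sketched as follows. The "physics model" is an invented stand-in for the expensive CFD solver, and plain polynomial least squares stands in for the paper's trained machine-learning model; both are illustrative only.

```python
import numpy as np

# Stand-in "physics-based" model: an invented, smooth nonlinear function of
# two hypothetical anatomical features (stenosis degree s, vessel radius r)
def physics_model(X):
    s, r = X[:, 0], X[:, 1]
    return 0.95 - 0.6 * s ** 2 / (r + 0.5)

def poly_features(X):
    s, r = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), s, r, s * r, s ** 2, r ** 2,
                            s ** 2 * r, s * r ** 2, s ** 3, r ** 3])

rng = np.random.default_rng(2)

# 1) Offline: label a synthetic database with the expensive model
Xtr = rng.uniform(0.0, 1.0, (5000, 2))
ytr = physics_model(Xtr)

# 2) Train a cheap surrogate (here: cubic polynomial least squares)
w, *_ = np.linalg.lstsq(poly_features(Xtr), ytr, rcond=None)

# 3) Online: predict near-instantly instead of running the solver
Xte = rng.uniform(0.0, 1.0, (1000, 2))
pred = poly_features(Xte) @ w
truth = physics_model(Xte)
corr = float(np.corrcoef(pred, truth)[0, 1])
print(round(corr, 3))  # the surrogate closely tracks the physics model
```

The expensive model is evaluated only offline, to build the training labels; at run time the surrogate is a single matrix-vector product.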

  5. Investigation of molten pool oscillation during GMAW-P process based on a 3D model

    NASA Astrophysics Data System (ADS)

    Wang, L. L.; Lu, F. G.; Cui, H. C.; Tang, X. H.

    2014-11-01

    In order to better reveal the oscillation mechanism of the pulsed gas metal arc welding (GMAW-P) process under an alternately varying welding current, the arc plasma and molten pool oscillation were simulated through a self-consistent three-dimensional model. Based on an experimental analysis of the dynamic variation of the arc plasma and molten pool captured by a high-speed camera, the model was validated by comparison of the measured and predicted results. The calculated results showed that arc pressure was the key factor causing the molten pool to oscillate. The variation in arc size and temperature from peak time to base time resulted in a great difference in the heat input and arc pressure acting on the molten pool. The surface deformation of the molten pool due to the varying degrees of arc pressure induced alternate displacement and backflow in the molten metal. The periodic alternation of deeper and shallower surface deformation, and of draining and backflow of molten metal, caused the molten pool to oscillate at a certain frequency. Under these conditions, the arc pressure at the peak time is more than six times that at the base time, and the maximum surface depression is 1.4 mm at peak time and 0.6 mm at base time.

  6. Hydrothermal time models for conidial germination and mycelial growth of the seed pathogen Pyrenophora semeniperda

    Treesearch

    Connor W. Barth; Susan E. Meyer; Julie Beckstead; Phil S. Allen

    2015-01-01

    Population-based threshold models using hydrothermal time (HTT) have been widely used to model seed germination. We used HTT to model conidial germination and mycelial growth for the seed pathogen Pyrenophora semeniperda in a novel approach to understanding its interactions with host seeds. Germination time courses and mycelial growth rates for P. semeniperda were...
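A population-based hydrothermal time model of the kind referred to here can be sketched as follows. The functional form (a hydrothermal time constant theta_HT and a normally distributed base water potential across the population) is the standard HTT formulation; all parameter values are illustrative, not taken from this study.

```python
import math

def htt_germination_fraction(t, T, psi, theta_HT=40.0, T_b=2.0,
                             psi_b50=-1.0, sigma_psi_b=0.4):
    """Population-based hydrothermal time (HTT) germination model.

    Fraction of the seed (or spore) population germinated by time t (days)
    at temperature T (deg C) and water potential psi (MPa). Fraction g
    germinates once (psi - psi_b(g)) * (T - T_b) * t >= theta_HT, with the
    base water potential psi_b normally distributed over the population.
    """
    if T <= T_b or t <= 0:
        return 0.0
    # base water potential of the fraction just reaching its threshold at t
    psi_b_crit = psi - theta_HT / ((T - T_b) * t)
    z = (psi_b_crit - psi_b50) / sigma_psi_b
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Germination time course at 20 deg C in free water (psi = 0 MPa)
for t in [1, 2, 4, 8]:
    print(t, round(htt_germination_fraction(t, 20.0, 0.0), 3))
```

The same thermal-time logic (growth proportional to T - T_b) extends to mycelial growth rates.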

  7. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event with modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data in the models increases accuracy by approximately 15% for the events examined.

  8. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated from undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are highly consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
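The Single Layer Model assumption mentioned above maps each slant TEC observation to a vertical TEC at the ionospheric pierce point. A minimal sketch, assuming a shell height of 450 km (a common choice; the study's actual value is not stated here):

```python
import math

R_E = 6371.0   # mean Earth radius, km
H = 450.0      # assumed single-layer (shell) height, km

def slant_to_vertical_tec(stec, zenith_deg):
    """Map slant TEC to vertical TEC under the Single Layer Model (SLM).

    The ionosphere is collapsed to a thin shell at height H; the obliquity
    factor is cos(z'), where z' is the zenith angle at the pierce point:
    sin(z') = R_E / (R_E + H) * sin(z).
    """
    z = math.radians(zenith_deg)
    sin_zp = R_E / (R_E + H) * math.sin(z)
    cos_zp = math.sqrt(1.0 - sin_zp ** 2)
    return stec * cos_zp

# A 50 TECU slant observation at 60 deg zenith maps to a smaller vertical TEC
print(round(slant_to_vertical_tec(50.0, 60.0), 2))
```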

  9. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
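The tapped-delay-line form of the CIR lends itself to direct computation of channel parameters such as RMS delay spread. A short sketch with an invented multipath profile (not derived from the paper's tunnel geometry):

```python
import numpy as np

def rms_delay_spread(taps, delays):
    """RMS delay spread of a tapped-delay-line channel impulse response
    h(t) = sum_k a_k * delta(t - tau_k), with complex tap amplitudes a_k."""
    p = np.abs(taps) ** 2
    p = p / p.sum()                            # normalized power-delay profile
    mean_delay = np.sum(p * delays)
    return np.sqrt(np.sum(p * (delays - mean_delay) ** 2))

# Invented multipath profile: a strong direct path plus decaying echoes
delays = np.array([0.0, 20e-9, 45e-9, 80e-9])          # seconds
taps = np.array([1.0, 0.5, 0.25, 0.1], dtype=complex)  # tap amplitudes
print(rms_delay_spread(taps, delays))  # on the order of tens of ns
```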

  10. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments

    PubMed Central

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2018-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments — one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.† PMID:29457801

  11. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure, and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
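A Wiener degradation model with unit-to-unit variation and measurement error can be simulated directly. A minimal sketch with illustrative parameters, using simulation plus a simple method-of-moments check rather than the paper's MLE procedure:

```python
import numpy as np

def simulate_degradation(n_units=200, n_obs=20, dt=1.0, mu=0.5,
                         sigma_mu=0.05, sigma_B=0.2, sigma_eps=0.1, seed=3):
    """Simulate a Wiener degradation model with unit-to-unit variation
    (random drift) and additive measurement error:

        Y_i(t_j) = mu_i * t_j + sigma_B * B(t_j) + eps_ij,
        mu_i ~ N(mu, sigma_mu^2), eps_ij ~ N(0, sigma_eps^2)
    """
    rng = np.random.default_rng(seed)
    t = dt * np.arange(1, n_obs + 1)
    drifts = rng.normal(mu, sigma_mu, n_units)
    increments = rng.normal(0.0, sigma_B * np.sqrt(dt), (n_units, n_obs))
    true_paths = drifts[:, None] * t + np.cumsum(increments, axis=1)
    observed = true_paths + rng.normal(0.0, sigma_eps, (n_units, n_obs))
    return t, observed

t, y = simulate_degradation()
# Method-of-moments sanity check: the slope of the mean path recovers mu
slope = float(np.polyfit(t, y.mean(axis=0), 1)[0])
print(round(slope, 2))  # close to the true mean drift 0.5
```

In the full model a transformed time scale replaces t; the identity scale is used here for brevity.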

  12. The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation

    NASA Astrophysics Data System (ADS)

    Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell

    2018-02-01

    The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined by the CME's initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that performs ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of the ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to, the ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr, and RMSE = 14.4 hr). Overall, DBEM and ENLIL show similar performance. Furthermore, we find that both models predict fast CMEs to arrive earlier than observed, most likely owing to the physical limitations of the models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
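The drag-based equation of motion, dv/dt = -gamma (v - w) |v - w|, combined with ensemble sampling of the uncertain inputs, can be sketched as follows. The initial distance, speed distributions, and drag-parameter range are illustrative values of typical magnitude, not the DBEM defaults.

```python
import numpy as np

AU = 1.496e8          # astronomical unit, km
R_SUN = 6.96e5        # solar radius, km

def dbm_transit_hours(r0, v0, w, gamma, dt=600.0):
    """Integrate the drag-based equation dv/dt = -gamma*(v - w)*|v - w|
    (speeds in km/s, gamma in 1/km) until the CME front reaches 1 AU."""
    r, v, t = r0, v0, 0.0
    while r < AU:
        a = -gamma * (v - w) * abs(v - w)
        v += a * dt
        r += v * dt
        t += dt
        if t > 3600.0 * 24 * 20:                 # give up after 20 days
            return np.nan
    return t / 3600.0

# Ensemble run: sample the uncertain inputs, collect the arrival-time spread
rng = np.random.default_rng(4)
n = 500
v0 = rng.normal(1000.0, 100.0, n)                # initial CME speed, km/s
w = rng.normal(400.0, 50.0, n)                   # solar wind speed, km/s
g = rng.uniform(0.2e-7, 2.0e-7, n)               # drag parameter, 1/km
times = np.array([dbm_transit_hours(20 * R_SUN, a, b, c)
                  for a, b, c in zip(v0, w, g)])
print(round(float(np.nanmedian(times)), 1), "h median transit time")
```

The spread of `times` is the ensemble's forecast uncertainty; percentiles of it give the confidence interval on arrival time.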

  13. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  14. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. 
Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  15. Real Time Data Management for Estimating Probabilities of Incidents and Near Misses

    NASA Astrophysics Data System (ADS)

    Stanitsas, P. D.; Stephanedes, Y. J.

    2011-08-01

    Advances in real-time data collection, data storage and computational systems have led to development of algorithms for transport administrators and engineers that improve traffic safety and reduce cost of road operations. Despite these advances, problems in effectively integrating real-time data acquisition, processing, modelling and road-use strategies at complex intersections and motorways remain. These are related to increasing system performance in identification, analysis, detection and prediction of traffic state in real time. This research develops dynamic models to estimate the probability of road incidents, such as crashes and conflicts, and incident-prone conditions based on real-time data. The models support integration of anticipatory information and fee-based road use strategies in traveller information and management. Development includes macroscopic/microscopic probabilistic models, neural networks, and vector autoregressions tested via machine vision at EU and US sites.

  16. Effect of Time Varying Gravity on DORIS processing for ITRF2013

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.

    2013-12-01

    Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (cf. ITRF2013). One of the improvements that is envisaged is the application of improved models of time-variable gravity in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOCO02S model as a base), based on the processing of SLR and DORIS data to 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1, which included secular variations in only a few select coefficients. Previous work on altimeter satellite POD (cf. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and that orbit improvements are observed with the application of more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include more complete compliance with IERS2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling for DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time-varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.

  17. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  18. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
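The effect of a discrete detection limit xi can be illustrated by simulating an alternating-renewal two-state channel and merging sojourns shorter than xi into the ongoing observed level. This is a simplified simulation sketch of the limited-time-resolution mechanism, not the paper's analytical treatment; all rate parameters are invented.

```python
import random

def simulate_channel(n_sojourns, mean_open, mean_closed, seed=5):
    """Alternating-renewal simulation of a two-state channel with
    independent exponential open and closed sojourn times."""
    rng = random.Random(seed)
    states, durs = [], []
    for i in range(n_sojourns):
        is_open = (i % 2 == 0)
        states.append('O' if is_open else 'C')
        durs.append(rng.expovariate(1.0 / (mean_open if is_open else mean_closed)))
    return states, durs

def apply_time_resolution(states, durs, xi):
    """Merge sojourns shorter than the detection limit xi into the ongoing
    observed level, yielding the observed (xi-) open and closed times."""
    out = []                                   # list of [state, duration]
    for s, d in zip(states, durs):
        if out and (d < xi or s == out[-1][0]):
            out[-1][1] += d                    # unresolved event absorbed
        else:
            out.append([s, d])
    return out

states, durs = simulate_channel(20000, mean_open=1.0, mean_closed=0.5)
observed = apply_time_resolution(states, durs, xi=0.2)
xi_open = [d for s, d in observed if s == 'O']
true_open = [d for s, d in zip(states, durs) if s == 'O']
mean_true = sum(true_open) / len(true_open)
mean_obs = sum(xi_open) / len(xi_open)
print(round(mean_true, 2), round(mean_obs, 2))  # xi-open times are longer
```

Because brief closures go undetected, several true openings coalesce into one observed opening, biasing the observed open-time distribution upward, which is the effect the paper quantifies analytically.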

  19. Second Generation Weather Impacts Decision Aid User’s Manual

    DTIC Science & Technology

    2013-09-01

    from the pulldown Base Reference Time menu. Most models start at times based on Coordinated Universal Time (UTC) or Zulu time (Z) with the selections...

  20. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  1. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. 
This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
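The interval-censored exponential likelihood at the heart of this approach can be sketched as follows. This is a single-site, known-occupancy simplification with invented survey parameters and a grid-search MLE, not the full Bayesian hierarchical model described above.

```python
import numpy as np

def interval_censored_loglik(lam, intervals, n_nondetect, t_max):
    """Log-likelihood for an exponential time-to-first-detection model with
    interval-censored records: detections are known only up to an interval
    (t_lo, t_hi]; surveys with no detection by t_max are right-censored."""
    lo = np.array([a for a, _ in intervals])
    hi = np.array([b for _, b in intervals])
    ll = np.sum(np.log(np.exp(-lam * lo) - np.exp(-lam * hi)))
    ll += n_nondetect * (-lam * t_max)          # log P(T > t_max)
    return ll

# Simulate: true detection rate 0.8/min, detection times recorded only as
# 1-minute bins, surveys lasting 5 minutes
rng = np.random.default_rng(6)
t = rng.exponential(1 / 0.8, 500)
t_max = 5.0
detected = t <= t_max
intervals = [(float(np.floor(x)), float(np.floor(x)) + 1.0) for x in t[detected]]
n_miss = int((~detected).sum())

# Maximize the likelihood over a 1-D grid (a simple stand-in for MCMC)
grid = np.linspace(0.05, 3.0, 600)
lls = [interval_censored_loglik(g, intervals, n_miss, t_max) for g in grid]
lam_hat = float(grid[int(np.argmax(lls))])
print(round(lam_hat, 2))  # close to the true rate 0.8
```

Each detection contributes the probability mass of its recording interval, exp(-lam*t_lo) - exp(-lam*t_hi), rather than a point density, which is exactly what makes the estimator robust to interval-censoring.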

  2. Spatio-temporal population estimates for risk management

    NASA Astrophysics Data System (ADS)

    Cockings, Samantha; Martin, David; Smith, Alan; Martin, Rebecca

    2013-04-01

    Accurate estimation of population at risk from hazards and effective emergency management of events require not just appropriate spatio-temporal modelling of hazards but also of population. While much recent effort has been focused on improving the modelling and predictions of hazards (both natural and anthropogenic), there has been little parallel advance in the measurement or modelling of population statistics. Different hazard types occur over diverse temporal cycles, are of varying duration and differ significantly in their spatial extent. Even events of the same hazard type, such as flood events, vary markedly in their spatial and temporal characteristics. Conceptually and pragmatically then, population estimates should also be available for similarly varying spatio-temporal scales. Routine population statistics derived from traditional censuses or surveys are usually static representations in both space and time, recording people at their place of usual residence on census/survey night and presenting data for administratively defined areas. Such representations effectively fix the scale of population estimates in both space and time, which is unhelpful for meaningful risk management. Over recent years, the Pop24/7 programme of research, based at the University of Southampton (UK), has developed a framework for spatio-temporal modelling of population, based on gridded population surfaces. Based on a data model which is fully flexible in terms of space and time, the framework allows population estimates to be produced for any time slice relevant to the data contained in the model. It is based around a set of origin and destination centroids, which have capacities, spatial extents and catchment areas, all of which can vary temporally, such as by time of day, day of week, season. 
    A background layer, containing information on features such as transport networks and land use, provides information on the likelihood of people being in certain places at specific times. Unusual patterns associated with special events can also be modelled, and the framework is fully volume-preserving. Outputs from the model are gridded population surfaces for the specified time slice, either for total population or by sub-groups (e.g. age). Software to implement the models (SurfaceBuilder247) has been developed, and pre-processed layers for typical time slices for England and Wales in 2001 and 2006 are available for UK academic purposes. The outputs and modelling framework from the Pop24/7 programme provide significant opportunities for risk management applications. For estimates of mid- to long-term cumulative population exposure to hazards, such as in flood risk mapping, populations can be produced for numerous time slices and integrated with flood models. For applications in emergency response/management, time-specific population models can be used as seeds for agent-based models or other response/behaviour models. Estimates for sub-groups of the population also permit exploration of vulnerability through space and time. This paper outlines the requirements for effective spatio-temporal population models for risk management. It then describes the Pop24/7 framework and illustrates its potential for risk management through presentation of examples from natural and anthropogenic hazard applications. The paper concludes by highlighting key challenges for future research in this area.

  3. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
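For the simple Poisson model, the quantities discussed (expected number of misgradings, inter-occurrence times, and the waiting time for the n-th misgrading) have closed forms. A small sketch with an invented misgrading rate:

```python
import math

lam = 0.4        # assumed misgrading rate (misgradings per batch of tests)

def p_nth_by(n, t, lam):
    """P(waiting time for the n-th misgrading <= t): the Gamma(n, lam) CDF,
    equal to P(N(t) >= n) for a Poisson process of rate lam."""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

expected_count = lam * 250          # E[N(t)] after t = 250 batches
mean_gap = 1.0 / lam                # mean inter-occurrence time (exponential)
mean_wait_5th = 5 / lam             # E[waiting time for the 5th misgrading]

print(expected_count, mean_gap, mean_wait_5th,
      round(p_nth_by(1, mean_gap, lam), 3))
```

The hierarchical Beta-Poisson variant additionally places a prior on `lam`, mixing these quantities over grader-to-grader variation.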

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Jim Bouchard

    Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
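The combination of Latin Hypercube Sampling with a time-dependent non-suppression model can be sketched in a few lines. The exponential non-suppression form PNS = exp(-lambda*t) follows NUREG/CR-6850, while the damage-time and suppression-rate distributions below are purely illustrative, not the facility's data.

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """n stratified samples in [0,1)^dims: one point per equal-probability
    bin along each dimension, with bins paired at random across dimensions."""
    u = (np.arange(n)[:, None] + rng.random((n, dims))) / n
    for j in range(dims):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(7)
n = 10000
u = latin_hypercube(n, 2, rng)

# Hypothetical input distributions (for illustration only), mapped from
# the sampled quantiles:
t_dmg = 10.0 + 20.0 * u[:, 0]      # component damage time, 10-30 min
lam = 0.05 + 0.15 * u[:, 1]        # suppression rate, 0.05-0.2 per min

# NUREG/CR-6850-style exponential non-suppression model: PNS = exp(-lambda*t)
p_ns = np.exp(-lam * t_dmg)
print(round(float(p_ns.mean()), 3))  # mean fire non-suppression probability
```

Compared with plain Monte Carlo, the stratification guarantees each input quantile range is sampled exactly once, which is why LHS converges faster for this kind of mean-probability estimate.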

  5. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.
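The comparison reported above, tree-based versus conventional regression on a non-linear density surface, can be sketched on synthetic data. The covariates, functional form, and scikit-learn usage here are illustrative stand-ins, not the study's Peruvian dataset or its full set of six models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(size=(600, 2))        # e.g. built-up fraction, road density
# density with exponential growth and a threshold effect (illustrative)
y = np.exp(3 * X[:, 0]) * (1 + (X[:, 1] > 0.5)) + rng.normal(0, 0.1, 600)
Xtr, Xte, ytr, yte = X[:400], X[400:], y[:400], y[400:]

lin = LinearRegression().fit(Xtr, ytr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)

mse_lin = float(np.mean((lin.predict(Xte) - yte) ** 2))
mse_rf = float(np.mean((rf.predict(Xte) - yte) ** 2))
```

On a surface with thresholds and exponential growth, the non-parametric forest captures structure a linear fit cannot, which mirrors the study's finding when no prior-period information is available.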

  6. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.

  7. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

  8. Detecting and modelling delayed density-dependence in abundance time series of a small mammal (Didelphis aurita)

    NASA Astrophysics Data System (ADS)

    Brigatti, E.; Vieira, M. V.; Kajin, M.; Almeida, P. J. A. L.; de Menezes, M. A.; Cerqueira, R.

    2016-02-01

We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality is nowadays common practice for describing these phenomena, but, in general, such tools are more capable of producing clear diagnoses than of supporting valuable modelling. For this reason, in our approach, we detect the principal temporal structures on the basis of different correlation measures, and from these results we build an ad-hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly, our model reproduces the time patterns of the empirical series very well and, for the first time, clearly outlines the importance of the time of attaining sexual maturity as a central temporal scale for the dynamics of this species. An important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements.
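An ad-hoc autoregressive model of the kind described, direct (lag-1) plus delayed density dependence at a lag d standing in for the age of sexual maturity, can be written as x_t = a + b·x_{t-1} + c·x_{t-d} + ε_t on log-abundance and fitted by least squares. All coefficient values and the lag below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, c, d = 0.8, 0.6, -0.3, 4      # d: assumed delay (e.g. maturation time)
x = np.zeros(3000)
x[:d] = 1.0
for t in range(d, 3000):
    x[t] = a + b * x[t - 1] + c * x[t - d] + rng.normal(0, 0.1)

# Recover the drivers by ordinary least squares on (1, x_{t-1}, x_{t-d})
T = np.arange(d, 3000)
A = np.column_stack([np.ones(T.size), x[T - 1], x[T - d]])
coef, *_ = np.linalg.lstsq(A, x[T], rcond=None)
```

Because every parameter maps to a biological quantity (growth rate, direct and delayed regulation, maturation delay), each estimate can be checked against independent measurements, which is the consistency check the authors emphasize.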

  9. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation Launch System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated using an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  10. Modeling of Engine Parameters for Condition-Based Maintenance of the MTU Series 2000 Diesel Engine

    DTIC Science & Technology

    2016-09-01

To model the behavior of the engine, an autoregressive distributed lag (ARDL) time series model of engine speed and exhaust gas temperature is derived. The lag length for the ARDL model is determined by whitening of the residuals.

  11. Reduced rank models for travel time estimation of low order mode pulses.

    PubMed

    Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M

    2013-10-01

    Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
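The EOF step can be illustrated with an SVD of a matrix of mode time series: a few leading singular vectors capture most of the variance, and that low-rank subspace is what a matched subspace detector would operate in. The arrival shapes and noise level below are invented, not LOAPEX data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
arr1 = np.exp(-((t - 0.5) / 0.05) ** 2)      # direct arrival shape
arr2 = np.exp(-((t - 0.6) / 0.05) ** 2)      # scattered, delayed arrival
# 100 receptions, each a random mix of the two shapes plus noise
X = np.array([a * arr1 + b * arr2 for a, b in rng.normal(1.0, 0.3, (100, 2))])
X += rng.normal(0, 0.01, X.shape)

# EOFs = right singular vectors of the centered data matrix
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
var_frac = float((s[:2] ** 2).sum() / (s ** 2).sum())  # rank-2 variance share
```

That a handful of EOFs explain nearly all the variance is the empirical observation the reduced-rank estimator exploits.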

  12. The Two Modes of Distance Education.

    ERIC Educational Resources Information Center

    Keegan, Desmond

    1998-01-01

    Discusses two models of distance-education, group-based versus individual-based. Highlights include group-based distance education for full-time and part-time students; individual-based distance education with pre-prepared materials and without pre-prepared materials; and distance education management and research. (LRW)

  13. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

Time series are used in many domains, including finance, engineering, economics, and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed, comparing it with two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  14. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.
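The core idea, symplectic integration in time combined with Fourier-based spatial derivatives, can be shown in a minimal 1-D analogue: a pseudospectral Laplacian and a Störmer-Verlet (leapfrog) step for p_tt = c² p_xx. Grid size, time step, and the initial pulse are illustrative; a near-constant discrete energy is the signature of the structure-preserving update:

```python
import numpy as np

n, L, c, dt, steps = 256, 1.0, 1.0, 5e-4, 2000
x = np.linspace(0.0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def lap(p):                              # spectral (Fourier) Laplacian
    return np.fft.ifft(-(k ** 2) * np.fft.fft(p)).real

def energy(p, v):                        # discrete acoustic energy
    grad = np.fft.ifft(1j * k * np.fft.fft(p)).real
    return 0.5 * np.sum(v ** 2) + 0.5 * c ** 2 * np.sum(grad ** 2)

p = np.exp(-300.0 * (x - 0.5) ** 2)      # initial pressure pulse
v = np.zeros(n)                          # dp/dt
e0 = energy(p, v)
for _ in range(steps):                   # symplectic Stormer-Verlet step
    v += 0.5 * dt * c ** 2 * lap(p)
    p += dt * v
    v += 0.5 * dt * c ** 2 * lap(p)
drift = abs(energy(p, v) - e0) / e0
```

A non-symplectic integrator of the same order would show secular energy growth or decay over long runs; the symplectic update keeps the drift bounded, which is the stability property the abstract highlights.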

  15. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires.

    PubMed

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas

    2016-05-01

We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R²) of the variance of both objectively measured time spent sedentary and in physical activity since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21-63%.
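The modelling recipe above, ordinary least squares followed by a bootstrap estimate of optimism, can be sketched on simulated data. The predictor count, coefficients, and noise level are invented and are not the study's worker dataset:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 214, 6                                    # 214 workers, 6 predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = np.array([300.0, 40, -25, 15, 10, 5, 0])  # minutes/day, illustrative
y = X @ beta + rng.normal(0, 60, n)              # "objectively measured" time

def r2(Xm, ym, b):
    res = ym - Xm @ b
    tot = ym - ym.mean()
    return 1.0 - res @ res / (tot @ tot)

b_full = np.linalg.lstsq(X, y, rcond=None)[0]
apparent = r2(X, y, b_full)

# Bootstrap optimism: average gap between resample fit and original-data fit
B, opt = 200, 0.0
for _ in range(B):
    idx = rng.integers(0, n, n)
    b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    opt += r2(X[idx], y[idx], b) - r2(X, y, b)
corrected = apparent - opt / B
```

The optimism-corrected R² approximates how the model would perform in a new sample, which is what the internal validation in the abstract reports.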

  16. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
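A convenient stand-in for the DPMM clustering step is scikit-learn's variational BayesianGaussianMixture with a Dirichlet-process prior: given more components than needed, the prior concentrates mass on as many as the data support, so the number of clusters is determined automatically. The 2-D trajectory features and centers below are invented, and the paper's incremental and time-sensitive (tDPMM) extensions are not reproduced here:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# three well-separated groups of 2-D trajectory features
# (e.g. mean velocity and heading), 100 trajectories each
X = np.vstack([rng.normal(center, 0.3, size=(100, 2))
               for center in [(0, 0), (5, 5), (10, 0)]])

dpmm = BayesianGaussianMixture(
    n_components=10,                       # deliberately more than needed
    weight_concentration_prior_type="dirichlet_process",
    random_state=1, max_iter=500).fit(X)

effective = int(np.sum(dpmm.weights_ > 0.1))  # components with real mass
```

Components whose posterior weights collapse toward zero are effectively pruned, leaving roughly one component per true cluster.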

  17. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines; earlier models could describe these quantities only qualitatively. In our model, the probability functions of the failure unit in both single grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
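The series/parallel structure of the failure unit model lends itself to a Monte Carlo sketch: a line is a series chain of segments and fails at the minimum of their lifetimes, while each segment holds several failure units in parallel and fails at the maximum. Lognormal unit lifetimes and all counts below are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def line_lifetime(n_segments, units_per_segment, med=100.0, sigma=0.5,
                  trials=20000):
    """Return (MTTF, DTTF) for a line of n_segments in series, each with
    units_per_segment failure units in parallel; lognormal unit lives."""
    t_unit = rng.lognormal(np.log(med), sigma,
                           size=(trials, n_segments, units_per_segment))
    t_seg = t_unit.max(axis=2)     # parallel units: segment dies last
    t_line = t_seg.min(axis=1)     # series chain: line dies first
    return float(np.median(t_line)), float(np.log(t_line).std())

mttf_short, dttf_short = line_lifetime(n_segments=10, units_per_segment=3)
mttf_long, dttf_long = line_lifetime(n_segments=100, units_per_segment=3)
```

Longer lines fail earlier and with less spread, the qualitative length dependence of MTTF and DTTF that this model family is meant to capture.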

  18. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.

  19. Riemannian multi-manifold modeling and clustering in brain networks

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.

    2017-08-01

This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: Brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points in the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.

  20. Applying cost accounting to operating room staffing in otolaryngology: time-driven activity-based costing and outpatient adenotonsillectomy.

    PubMed

    Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M

    2015-04-01

(1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
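At its core, time-driven activity-based costing is one multiplication per resource: cost per case = Σ (capacity cost rate) × (time used). The rates and minutes below are invented, not the study's figures; they only show the mechanics of the two-room comparison:

```python
# role: ($ per minute, minutes used per adenotonsillectomy) -- illustrative
staff = {
    "surgeon":     (10.0, 30),
    "anesthetist": (6.0, 40),
    "or_nurse":    (1.5, 60),
    "scrub_tech":  (1.0, 60),
}
per_case = sum(rate * minutes for rate, minutes in staff.values())

# One surgeon covering two rooms: surgeon minutes per case halve, while
# all other roles are doubled across two simultaneous cases, leaving
# their per-case cost unchanged; only the surgeon term shrinks.
surgeon_rate, surgeon_min = staff["surgeon"]
per_case_two_rooms = per_case - surgeon_rate * surgeon_min / 2
savings = 1 - per_case_two_rooms / per_case
```

With these made-up numbers the two-room model saves about 22% per case; the study's 23.2% figure comes from its measured rates and times, not from this sketch.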

  1. Forecast and Specification of Radiation Belt Electrons Based on Solar Wind Measurements

    NASA Astrophysics Data System (ADS)

    Li, X.; Barker, A.; Burin Des Roziers, E.

    2003-12-01

    Relativistic electrons in the Earth's magnetosphere are of considerable practical importance because of their effect on spacecraft and because of their radiation hazard to astronauts who perform extravehicular activity. The good correlation between solar wind velocity and MeV electron fluxes at geosynchronous orbit has long been established. We have developed a radial diffusion model, using solar wind parameters as the only input, to reproduce the variation of the MeV electrons at geosynchronous orbit. Based on this model, we have constructed a real-time model that forecasts one to two days in advance the daily averaged >2 MeV electron flux at geosynchronous orbit using real-time solar wind data from ACE. The forecasts from this model are available on the web in real time. A natural extension of our current model is to create a system for making quantitative forecasts and specifications of radiation belt electrons at different radial distances and different local times based on the solar wind conditions. The successes and obstacles associated with this extension will be discussed in this presentation.

  2. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias

A real-time price (RTP)-based automatic demand response (ADR) strategy for PV-assisted electric vehicle (EV) Charging Station (PVCS) without vehicle to grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.
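The charging problem reduces to a small linear program (sketched here with scipy.optimize.linprog): choose charge energy x_t per interval and grid import g_t ≥ x_t − pv_t so that total cost Σ price_t·g_t is minimized subject to the total energy demand. The prices, PV forecast, and limits are invented, and the DFEDR voltage constraints are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([0.30, 0.25, 0.10, 0.08, 0.12, 0.28])  # RTP vector, $/kWh
pv = np.array([0.0, 1.0, 5.0, 6.0, 4.0, 0.5])           # PV energy, kWh
T, E, xmax = 6, 20.0, 8.0                # EV demand (kWh) and per-interval cap

# variables z = [x_0..x_5, g_0..g_5]; only grid energy costs money
c = np.concatenate([np.zeros(T), price])
A_ub = np.hstack([np.eye(T), -np.eye(T)])                # x_t - g_t <= pv_t
A_eq = np.concatenate([np.ones(T), np.zeros(T)])[None, :]  # sum(x) = E
res = linprog(c, A_ub=A_ub, b_ub=pv, A_eq=A_eq, b_eq=[E],
              bounds=[(0, xmax)] * T + [(0, None)] * T)
x, g = res.x[:T], res.x[T:]
```

The optimum soaks up all free PV (16.5 kWh here) and buys the remaining 3.5 kWh in the cheapest intervals that still have charging capacity, for a total cost of $0.31, which is exactly the cost-reduction and PV self-consumption behaviour the strategy targets.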

  3. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  4. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  5. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
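
    Information Theory Quantifiers of this kind build on ordinal-pattern (Bandt-Pompe) statistics. As a rough illustration only, not the authors' exact estimators, a minimal normalized permutation entropy can be sketched as follows:

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3):
    """Normalized permutation entropy from Bandt-Pompe ordinal patterns.

    Counts the relative frequency of ordinal patterns of length `order`
    and returns the Shannon entropy normalized by log(order!), so the
    result lies in [0, 1]: 0 for fully ordered, ~1 for fully random.
    """
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

# A strictly increasing series exhibits a single ordinal pattern,
# so its permutation entropy is zero.
monotone = list(range(100))
```

    Complexity measures used in the paper combine such an entropy with a disequilibrium term; the entropy alone already separates highly ordered from noisy dynamics.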

  6. Stochastic simulation of human pulmonary blood flow and transit time frequency distribution based on anatomic and elasticity data.

    PubMed

    Huang, Wei; Shi, Jun; Yen, R T

    2012-12-01

    The objective of our study was to develop a computer program for computing the transit time frequency distributions of red blood cells in human pulmonary circulation, based on our anatomic and elasticity data of blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in human pulmonary circulation. In this model, the connectivity data of pulmonary blood vessels in the human lung were converted into a probability matrix. Based on this model, the transit time of red blood cells in human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit time distributions and the blood pressure outputs.
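
    The probability-matrix idea can be sketched with a hypothetical miniature network; the four-segment topology, branching probabilities and segment times below are invented for illustration, not the paper's anatomic data:

```python
import random

# Monte Carlo sketch of transit times through a branching vascular network
# encoded as transition probabilities. Each cell accumulates a random dwell
# time per segment and branches according to the probability matrix row.
transition = {
    0: [(1, 1.0)],             # artery -> arteriole
    1: [(2, 0.8), (3, 0.2)],   # arteriole -> capillary, or shunt to vein
    2: [(3, 1.0)],             # capillary -> vein
}
mean_time = {0: 0.2, 1: 0.3, 2: 1.0, 3: 0.5}  # mean dwell time per segment, s

def simulate_rbc(rng):
    """Walk one red blood cell through the network, summing dwell times."""
    state, t = 0, 0.0
    while state in transition:
        t += rng.expovariate(1.0 / mean_time[state])  # random dwell time
        nxt, weights = zip(*transition[state])
        state = rng.choices(nxt, weights=weights)[0]
    return t + mean_time[state]  # fixed time in the exit segment (vein)

rng = random.Random(42)
times = [simulate_rbc(rng) for _ in range(5000)]
mean_transit = sum(times) / len(times)
```

    A histogram of `times` is the simulated transit time frequency distribution; the real model replaces this toy matrix with one derived from measured vessel connectivity.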

  7. Adaptive Automation Design and Implementation

    DTIC Science & Technology

    2015-09-17

    Study: Space Navigator. This section demonstrates the player modeling paradigm, focusing specifically on the response generation section of the player … human-machine system, a real-time player modeling framework for imitating a specific person's task performance, and the Adaptive Automation System … Clustering-Based Real-Time Player Modeling …

  8. Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Matthaeus, William H.

    1990-01-01

    A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.

  9. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be based either on CTT or on IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method, based on the CTT approach, and the Rasch and Mixed models (RM), Plausible Values (PV), and Longitudinal Rasch model (LRM) methods, all based on the Rasch model. All methods have shown comparable results in terms of type I error, all close to 5 per cent. LRM and SM methods presented comparable power and unbiased time effect estimations, whereas RM and PV methods showed low power and biased time effect estimations. This suggests that RM and PV methods should be avoided to analyze longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
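
    For reference, the dichotomous Rasch model underlying the RM, PV and LRM methods gives the response probability as a logistic function of the difference between person ability and item difficulty; a minimal sketch:

```python
import math

# Dichotomous Rasch model: the probability of a correct (positive) response
# depends only on the difference between person ability theta and item
# difficulty b, through a logistic function.
def rasch_probability(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_equal = rasch_probability(0.0, 0.0)  # ability exactly matches difficulty
```

    When ability equals difficulty the response probability is 0.5; longitudinal extensions such as the LRM let theta evolve over measurement occasions.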

  10. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations that capture surface effects through diffuse-interface models [27]. Numerical simulation is a practical way to evaluate this model. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are applied based on the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions, using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes the four-species tumor growth model. To handle the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques for solving the two- and three-dimensional tumor-growth equations.

  11. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
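
    The four-stage flow can be sketched structurally. In the sketch below EMD, EEMD, the RBF networks and the LNN are each replaced by a deliberately simple stand-in (moving average, trend/residual split, persistence forecast, linear combination) so the skeleton runs without specialized libraries; it illustrates the pipeline shape, not the paper's algorithms:

```python
# Structural sketch of the 'denoising, decomposition and ensemble' principle.
# Each stage is a simple stand-in for the method named in the abstract.
def denoise(series, k=3):
    """Stand-in for EMD denoising: centered moving average."""
    n = len(series)
    out = []
    for i in range(n):
        window = series[max(0, i - k):min(n, i + k + 1)]
        out.append(sum(window) / len(window))
    return out

def decompose(series, k=10):
    """Stand-in for EEMD: split into a slow trend plus a residual 'IMF'."""
    trend = denoise(series, k)
    residual = [x - t for x, t in zip(series, trend)]
    return [trend, residual]

def predict_component(component):
    """Stand-in for the RBFNN: one-step persistence forecast."""
    return component[-1]

def ensemble(predictions, weights=None):
    """Stand-in for the LNN: weighted linear combination of forecasts."""
    weights = weights or [1.0] * len(predictions)
    return sum(w * p for w, p in zip(weights, predictions))

series = [float(i % 20) for i in range(200)]   # toy periodic 'runoff' series
denoised = denoise(series)
components = decompose(denoised)
forecast = ensemble([predict_component(c) for c in components])
```

    The key structural property survives the simplification: the components sum back to the denoised series, so the ensemble stage recombines complementary parts of the signal.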

  12. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  13. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    PubMed

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41min, -$23) and pre-operative floor (-57min, -$18). While post-anesthesia care unit duration and costs increased (+224min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984min to 966min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II; type of study: economic analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
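
    The TDABC arithmetic is simple: each process's duration is multiplied by the capacity cost rate of the personnel performing it, and consumables are added. A toy sketch with invented figures (not the paper's processes or rates):

```python
# Time-driven activity-based costing in miniature: episode cost is the sum
# over processes of duration times personnel capacity cost rate, plus
# consumables. All figures below are invented, not the study's data.
processes = [
    # (process, minutes, personnel capacity cost rate in $/min)
    ("triage assessment", 15, 1.2),
    ("surgical consult", 20, 3.5),
    ("appendectomy", 60, 8.0),
    ("recovery monitoring", 120, 0.9),
]
consumables = 350.0

personnel_cost = sum(minutes * rate for _, minutes, rate in processes)
total_cost = personnel_cost + consumables
total_minutes = sum(minutes for _, minutes, _ in processes)
```

    Re-running such a model after a process change (e.g., dropping a floor stay) immediately shows the new episode cost and duration, which is what makes the approach dynamic.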

  14. TK Modeler version 1.0, a Microsoft® Excel®-based modeling software for the prediction of diurnal blood/plasma concentration for toxicokinetic use.

    PubMed

    McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A

    2012-07-01

    TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single bolus, multiple bolus, or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
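
    The principle of superposition the program relies on can be illustrated for a one-compartment IV bolus model: each dose decays independently and the concentrations add. Parameter values below are invented, not TK Modeler defaults:

```python
import math

# One-compartment IV bolus kinetics with superposition: each dose given at
# time td contributes (dose/volume) * exp(-ke * (t - td)) for t >= td.
def concentration(t, doses, volume, ke):
    """Blood concentration at time t (h) from bolus doses [(t_dose, mg)]."""
    return sum(amount / volume * math.exp(-ke * (t - td))
               for td, amount in doses if t >= td)

half_life = 4.0                       # h
ke = math.log(2) / half_life          # first-order elimination rate (1/h)
doses = [(0.0, 100.0), (12.0, 100.0), (24.0, 100.0)]  # 100 mg every 12 h
c24 = concentration(24.0, doses, volume=10.0, ke=ke)  # mg/L just after dose 3
```

    Evaluating `concentration` on a fine time grid gives the diurnal profile from which C(max), troughs, and candidate sparse-sampling times for AUC(24h) can be read off.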

  15. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, and suggests adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  16. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  17. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.

  18. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.

  19. A physiologically based pharmacokinetic (PBPK) model for intravenous and ingested dimethylarsinic acid (DMA(V)) in mice.

    EPA Science Inventory

    A physiologically based pharmacokinetic (PBPK) model for the organoarsenical dimethylarsinic acid (DMA(V)) was developed in mice. The model was calibrated using tissue time course data from multiple tissues in mice administered DMA(V) intravenously. The final model structure was ...

  20. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable-order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
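
    The core idea, advancing the state with the transition matrix of a locally constant system, can be sketched as follows; a fixed-order Taylor series matrix exponential stands in for the paper's variable-order scheme:

```python
import math

# State-transition-matrix step for x' = A x: x(t+h) = expm(A*h) x(t).
# The truncated Taylor series is adequate here because ||A*h|| is small.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=20):
    """exp(A) by summing A^k / k! (fixed order; small matrices only)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Undamped oscillator x'' = -x as a first-order system; exact: x(t) = cos(t).
A = [[0.0, 1.0], [-1.0, 0.0]]
h = 0.1
Phi = expm([[a * h for a in row] for row in A])  # state transition matrix
state = [1.0, 0.0]                               # x(0) = 1, x'(0) = 0
for _ in range(100):                             # advance to t = 10
    state = [sum(Phi[i][j] * state[j] for j in range(2)) for i in range(2)]
```

    Because the transition matrix is computed once and reused, the per-step cost is a single matrix-vector product, and for a constant A the stepping is exact up to the series truncation.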

  1. Anomalous diffusion for bed load transport with a physically-based model

    NASA Astrophysics Data System (ADS)

    Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.

    2013-12-01

    Diffusion of bed load particles shows both normal and anomalous behavior on different spatial-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to develop an episodic Langevin equation (ELE) for individual particle motion, which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform-size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles reveal ballistic motion at short time scales and turn to normal diffusion at long time scales. The PDF of simulated particle travel distances also changes shape from exponential to Gamma to Gaussian with increasing timescale, implying different diffusion scaling regimes. For a power-law PDF (with power −μ) of waiting times, the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., those based on fractional advection-diffusion models with thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops are achieved instantaneously; in reality, particles achieve their hops within finite times (as we simulate here), even if the hop times are much shorter than the waiting times. In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
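
    The connection between waiting-time statistics and diffusive scaling can be illustrated with a minimal renewal walk (unit hops, exponential waits). This is a deliberate simplification for illustration, not the ELE itself:

```python
import random

# Minimal renewal walk: a particle waits an exponentially distributed time,
# then makes an instantaneous unit hop left or right. With exponential
# waiting times the mean squared displacement (MSD) grows linearly in time
# (normal diffusion); heavy-tailed waits would break this scaling.
def displacement(rng, t_max, mean_wait=1.0):
    t, x = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean_wait)  # waiting time before the hop
        if t > t_max:
            return x
        x += rng.choice((-1, 1))               # unit hop

rng = random.Random(1)

def msd(t_max, n=4000):
    """Ensemble-averaged squared displacement at time t_max."""
    return sum(displacement(rng, t_max) ** 2 for _ in range(n)) / n

ratio = msd(40.0) / msd(20.0)  # close to 2 when diffusion is normal
```

    Doubling the observation time roughly doubles the MSD here; sub- or super-diffusion would show up as a ratio systematically below or above 2.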

  2. Modeling specific action potentials in the human atria based on a minimal single-cell model.

    PubMed

    Richter, Yvonne; Lind, Pedro G; Maass, Philipp

    2018-01-01

    We present an effective method to model empirical action potentials of specific patients in the human atria, based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic currents are introduced, each governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width and shape. Possible applications for supporting clinical diagnosis are pointed out.

  3. Integration of Dynamic Models in Range Operations

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge; Thirumalainambi, Rajkumar

    2004-01-01

    This work addresses the various model interactions in real time to make an efficient internet-based decision-making tool for Shuttle launch. The decision-making tool depends on the launch commit criteria coupled with physical models. Dynamic interaction between a wide variety of simulation applications and techniques, embedded algorithms, and data visualizations is needed to exploit the full potential of modeling and simulation. This paper also discusses in depth the details of web-based 3-D graphics and applications to range safety. The advantages of this dynamic model integration are secure accessibility and distribution of real-time information to other NASA centers.

  4. The effect of area size and predation on the time to extinction of prairie vole populations. simulation studies via SERDYCA: a Spatially-Explicit Individual-Based Model of Rodent Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova, T; Carlsen, T

    2003-11-21

    We present a spatially-explicit individual-based computational model of rodent dynamics, customized for the prairie vole species, M. ochrogaster. The model is based on trophic relationships and represents important features such as territorial competition, mating behavior, density-dependent predation and dispersal out of the modeled spatial region. Vegetation growth and vole fecundity are dependent on climatic components. The results of simulations show that the model correctly predicts the overall temporal dynamics of the population density. Time-series analysis shows a very good match between the periods corresponding to the peak population density frequencies predicted by the model and the ones reported in the literature. The model is used to study the relation between persistence, landscape area and predation. We introduce the notions of average time to extinction (ATE) and persistence frequency to quantify persistence. While the ATE decreases with decrease of area, it is a bell-shaped function of the predation level: increasing for 'small' and decreasing for 'large' predation levels.

  5. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    NASA Astrophysics Data System (ADS)

    Schlechtingen, Meik; Ferreira Santos, Ilmar

    2011-07-01

    This paper presents the research results of a comparison of three different model based approaches for wind turbine fault detection in online SCADA data, by applying the developed models to five real measured faults and anomalies. The regression based model, as the simplest approach to build a normal behavior model, is compared to two artificial neural network based approaches, which are a full signal reconstruction and an autoregressive normal behavior model. Based on a real time series containing two generator bearing damages, the capabilities of identifying the incipient fault prior to the actual failure are investigated. The period after the first bearing damage is used to develop the three normal behavior models. The developed or trained models are used to investigate how the second damage manifests in the prediction error. Furthermore, the full signal reconstruction and the autoregressive approach are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed that all three models are capable of detecting incipient faults. However, they differ in the effort required for model development and the remaining operational time after first indication of damage. The general nonlinear neural network approaches outperform the regression model. The remaining seasonality in the regression model prediction error makes it difficult to detect abnormality and leads to increased alarm levels and thus a shorter remaining operational period. For the bearing damages and the stator anomalies under investigation the full signal reconstruction neural network gave the best fault visibility and thus led to the highest confidence level.
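
    In its simplest form, a regression-based normal behavior model amounts to fitting healthy-period data and alarming on large prediction residuals. A sketch with synthetic data (the paper's neural network and autoregressive models are far richer than this stand-in):

```python
import math

# Normal-behavior-model sketch for SCADA-style condition monitoring: fit a
# linear model of bearing temperature vs. power output on healthy data,
# then flag samples whose residual exceeds 3 sigma of training residuals.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

power = [float(p) for p in range(10, 110, 10)]           # kW (healthy period)
temp = [20.0 + 0.1 * p + (0.1 if i % 2 == 0 else -0.1)   # deg C, tiny noise
        for i, p in enumerate(power)]
slope, intercept = fit_line(power, temp)

residuals = [t - (slope * p + intercept) for p, t in zip(power, temp)]
sigma = math.sqrt(sum(r * r for r in residuals) / len(residuals))

def is_anomaly(p, t):
    """Alarm when the prediction error leaves the healthy 3-sigma band."""
    return abs(t - (slope * p + intercept)) > 3 * sigma
```

    A bearing running 5 degrees hotter than the fitted normal behavior trips the alarm long before an absolute temperature limit would; residual seasonality, as the abstract notes, inflates sigma and delays detection.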

  6. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Spatial Modeling for Resources Framework (SMRF): A modular framework for developing spatial forcing data for snow modeling in mountain basins

    NASA Astrophysics Data System (ADS)

    Havens, Scott; Marks, Danny; Kormos, Patrick; Hedrick, Andrew

    2017-12-01

    In the Western US and many mountainous regions of the world, critical water resources and climate conditions are difficult to monitor because the observation network is generally very sparse. The critical resource from the mountain snowpack is water flowing into streams and reservoirs that will provide for irrigation, flood control, power generation, and ecosystem services. Water supply forecasting in a rapidly changing climate has become increasingly difficult because of non-stationary conditions. In response, operational water supply managers have begun to move from statistical techniques towards the use of physically based models. As we begin to transition physically based models from research to operational use, we must address the most difficult and time-consuming aspect of model initiation: the need for robust methods to develop and distribute the input forcing data. In this paper, we present a new open source framework, the Spatial Modeling for Resources Framework (SMRF), which automates and simplifies the common forcing data distribution methods. It is computationally efficient and can be implemented for both research and operational applications. We present an example of how SMRF is able to generate all of the forcing data required to run a physically based snow model at 50-100 m resolution over regions of 1000-7000 km2. The approach has been successfully applied in real time and historical applications for both the Boise River Basin in Idaho, USA and the Tuolumne River Basin in California, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of a physically based snow model possible.

  8. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ⁄2, and is approximately 2⁄μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
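
The quoted hazard behavior can be reproduced numerically. A minimal sketch, assuming the standard BPT (inverse Gaussian) density; the function names and the trapezoidal integration are illustrative, not from the paper:

```python
import math

def bpt_pdf(t, mu=1.0, alpha=0.5):
    # Brownian passage time density with mean mu and aperiodicity alpha
    return math.sqrt(mu / (2 * math.pi * alpha ** 2 * t ** 3)) * \
        math.exp(-(t - mu) ** 2 / (2 * mu * alpha ** 2 * t))

def bpt_hazard(t, mu=1.0, alpha=0.5, dt=1e-4):
    # Instantaneous failure rate h(t) = f(t) / (1 - F(t)), with the
    # cumulative F(t) obtained by trapezoidal integration of the density.
    cdf, prev, x = 0.0, bpt_pdf(dt, mu, alpha), dt
    while x < t:
        x += dt
        cur = bpt_pdf(x, mu, alpha)
        cdf += 0.5 * (prev + cur) * dt
        prev = cur
    return bpt_pdf(t, mu, alpha) / (1.0 - cdf)

# For alpha = 0.5 the hazard settles near 2/mu once t is well past mu:
print(bpt_hazard(3.0))
```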

  9. Development of an irrigation scheduling software based on model predicted crop water stress

    USDA-ARS?s Scientific Manuscript database

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...

  10. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
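
The two-stage idea (determine regression-model locality, then fit a local regression per region) can be sketched in miniature. Everything below, the data, the cluster count, and the linear local model, is invented for illustration and far simpler than the random-forest/LASSO machinery the paper uses:

```python
import random

def kmeans_1d(xs, k=2, iters=25):
    # minimal 1-D k-means with deterministic extreme-point initialization
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def fit_linear(pts):
    # closed-form least squares for y = a*x + b
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    a = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    return a, my - a * mx

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(400)]
# synthetic "error indicator" -> surrogate-error map with two regimes
ys = [2.0 * x if x < 0.5 else 5.0 * x - 1.5 for x in xs]

centers = kmeans_1d(xs)
nearest = lambda x: min(range(len(centers)), key=lambda j: abs(x - centers[j]))
local = {j: fit_linear([(x, y) for x, y in zip(xs, ys) if nearest(x) == j])
         for j in range(len(centers))}

def predict_error(x):
    # route the query to its cluster's local regression model
    a, b = local[nearest(x)]
    return a * x + b
```

In the real framework each "local" model is a high-dimensional regression over many error indicators; the routing step here plays the role of the classification/clustering stage.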

  11. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  12. Spectral analysis for nonstationary and nonlinear systems: a discrete-time-model-based approach.

    PubMed

    He, Fei; Billings, Stephen A; Wei, Hua-Liang; Sarrigiannis, Ptolemaios G; Zhao, Yifan

    2013-08-01

    A new frequency-domain analysis framework for nonlinear time-varying systems is introduced based on parametric time-varying nonlinear autoregressive with exogenous input models. It is shown how the time-varying effects can be mapped to the generalized frequency response functions (FRFs) to track nonlinear features in frequency, such as intermodulation and energy transfer effects. A new mapping to the nonlinear output FRF is also introduced. A simulated example and the application to intracranial electroencephalogram data are used to illustrate the theoretical results.

  13. Continuing Development of a Hybrid Model (VSH) of the Neutral Thermosphere

    NASA Technical Reports Server (NTRS)

    Burns, Alan

    1996-01-01

    We propose to continue the development of a new operational model of neutral thermospheric density, composition, temperatures and winds to improve current engineering environment definitions of the neutral thermosphere. This model will be based on simulations made with the National Center for Atmospheric Research (NCAR) Thermosphere-Ionosphere- Electrodynamic General Circulation Model (TIEGCM) and on empirical data. It will be capable of using real-time geophysical indices or data from ground-based and satellite inputs and provides neutral variables at specified locations and times. This "hybrid" model will be based on a Vector Spherical Harmonic (VSH) analysis technique developed (over the last 8 years) at the University of Michigan that permits the incorporation of the TIGCM outputs and data into the model. The VSH model will be a more accurate version of existing models of the neutral thermospheric, and will thus improve density specification for satellites flying in low Earth orbit (LEO).

  14. Phase-field-based multiple-relaxation-time lattice Boltzmann model for incompressible multiphase flows.

    PubMed

    Liang, H; Shi, B C; Guo, Z L; Chai, Z H

    2014-05-01

In this paper, a phase-field-based multiple-relaxation-time lattice Boltzmann (LB) model is proposed for incompressible multiphase flow systems. In this model, one distribution function is used to solve the Cahn-Hilliard equation and the other is adopted to solve the Navier-Stokes equations. Unlike previous phase-field-based LB models, a proper source term is incorporated in the interfacial evolution equation such that the Cahn-Hilliard equation can be derived exactly and also a pressure distribution is designed to recover the correct hydrodynamic equations. Furthermore, the pressure and velocity fields can be calculated explicitly. A series of numerical tests, including Zalesak's disk rotation, a single vortex, a deformation field, and a static droplet, have been performed to test the accuracy and stability of the present model. The results show that, compared with the previous models, the present model is more stable and achieves an overall improvement in the accuracy of capturing the interface. In addition, compared to the single-relaxation-time LB model, the present model can effectively reduce the spurious velocity and fluctuation of the kinetic energy. Finally, as an application, the Rayleigh-Taylor instability at high Reynolds numbers is investigated.

  15. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    PubMed

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
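
The state-filtering step that underlies SDE-based likelihood evaluation can be illustrated with a scalar Kalman filter. A minimal sketch, with invented noise settings and names (not the paper's Matlab implementation), showing the predict/update cycle used to handle serially correlated residuals:

```python
def kalman_1d(ys, a=0.9, q=0.1, r=0.5):
    # Filter observations y_t = x_t + e_t of a latent state
    # x_t = a * x_{t-1} + w_t; q and r are the process and measurement
    # noise variances (illustrative values).
    x, p = 0.0, 1.0                 # state mean and variance
    estimates = []
    for y in ys:
        x, p = a * x, a * a * p + q          # predict
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p  # update
        estimates.append(x)
    return estimates
```

In the full NLME-SDE setting, the filter's one-step prediction errors (innovations) and their variances are what enter the likelihood being maximized.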

  16. Modeling bursts and heavy tails in human dynamics

    NASA Astrophysics Data System (ADS)

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experience very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution P(τ_w) ∼ τ_w^(−α) with α=3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by α=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α=1, the surface mail based communication belongs to the α=3/2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
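
The fixed-length priority queue model can be simulated directly. A minimal sketch with invented parameter names: with probability p the highest-priority task is executed, otherwise a random one, and each executed task is replaced by a fresh task with random priority:

```python
import random

def simulate_waits(steps=20000, p=0.9999, queue_len=2, seed=1):
    rng = random.Random(seed)
    queue = [(rng.random(), 0) for _ in range(queue_len)]  # (priority, arrival step)
    waits = []
    for step in range(1, steps + 1):
        if rng.random() < p:
            # priority-driven choice: execute the highest-priority task
            i = max(range(queue_len), key=lambda j: queue[j][0])
        else:
            # occasional priority-blind choice
            i = rng.randrange(queue_len)
        waits.append(step - queue[i][1])
        queue[i] = (rng.random(), step)
    return waits

waits = sorted(simulate_waits())
median, longest = waits[len(waits) // 2], waits[-1]
# bursty behavior: most tasks execute almost immediately, while a few
# low-priority tasks wait orders of magnitude longer
print(median, longest)
```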

  17. Modeling bursts and heavy tails in human dynamics.

    PubMed

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution P(tau(w)) approximately tau(w)(-alpha) with alpha=3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by alpha=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display alpha=1, the surface mail based communication belongs to the alpha=3/2 universality class. 
Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.

  18. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  19. Flash flood forecasting using simplified hydrological models, radar rainfall forecasts and data assimilation

    NASA Astrophysics Data System (ADS)

    Smith, P. J.; Beven, K.; Panziera, L.

    2012-04-01

The issuing of timely flood alerts may be dependent upon the ability to predict future values of water level or discharge at locations where observations are available. Catchments at risk of flash flooding often have a rapid natural response time, typically less than the forecast lead time desired for issuing alerts. This work focuses on the provision of short-range (up to 6 hours lead time) predictions of discharge in small catchments based on utilising radar forecasts to drive a hydrological model. An example analysis based upon the Verzasca catchment (Ticino, Switzerland) is presented. Parsimonious time series models with a mechanistic interpretation (so-called Data-Based Mechanistic models) have been shown to provide reliable accurate forecasts in many hydrological situations. In this study such a model is developed to predict the discharge at an observed location from observed precipitation data. The model is shown to capture the snow melt response at this site. Observed discharge data is assimilated to improve the forecasts, of up to two hours lead time, that can be generated from observed precipitation. To generate forecasts with greater lead time ensemble precipitation forecasts are utilised. In this study the Nowcasting ORographic precipitation in the Alps (NORA) product outlined in more detail elsewhere (Panziera et al. Q. J. R. Meteorol. Soc. 2011; DOI:10.1002/qj.878) is utilised. NORA precipitation forecasts are derived from historical analogues based on the radar field and upper atmospheric conditions. As such, they avoid the need to explicitly model the evolution of the rainfall field through for example Lagrangian diffusion. The uncertainty in the forecasts is represented by characterisation of the joint distribution of the observed discharge, the discharge forecast using the (in operational conditions unknown) future observed precipitation and that forecast utilising the NORA ensembles.
Constructing the joint distribution in this way allows the full historic record of data at the site to inform the predictive distribution. It is shown that, in part due to the limited availability of forecasts, the uncertainty in the relationship between the NORA based forecasts and other variates dominated the resulting predictive uncertainty.

  20. Road Risk Modeling and Cloud-Aided Safety-Based Route Planning.

    PubMed

    Li, Zhaojian; Kolmanovsky, Ilya; Atkins, Ella; Lu, Jianbo; Filev, Dimitar P; Michelini, John

    2016-11-01

    This paper presents a safety-based route planner that exploits vehicle-to-cloud-to-vehicle (V2C2V) connectivity. Time and road risk index (RRI) are considered as metrics to be balanced based on user preference. To evaluate road segment risk, a road and accident database from the highway safety information system is mined with a hybrid neural network model to predict RRI. Real-time factors such as time of day, day of the week, and weather are included as correction factors to the static RRI prediction. With real-time RRI and expected travel time, route planning is formulated as a multiobjective network flow problem and further reduced to a mixed-integer programming problem. A V2C2V implementation of our safety-based route planning approach is proposed to facilitate access to real-time information and computing resources. A real-world case study, route planning through the city of Columbus, Ohio, is presented. Several scenarios illustrate how the "best" route can be adjusted to favor time versus safety metrics.
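
The time-versus-risk trade-off can be illustrated with a toy weighted shortest-path search. The graph, weights, and risk scaling below are invented; the paper itself formulates the problem as a multiobjective network flow reduced to a mixed-integer program:

```python
import heapq

# node -> list of (neighbor, travel_time_min, road_risk_index)
graph = {
    "A": [("B", 10, 0.8), ("C", 15, 0.2)],
    "B": [("D", 10, 0.9)],
    "C": [("D", 12, 0.1)],
    "D": [],
}

def best_route(src, dst, w):
    # Dijkstra over a scalarized cost: (1 - w) * time + w * scaled risk.
    # The user preference w in [0, 1] tilts the planner from fastest (w=0)
    # toward safest (w=1).
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t, r in graph[node]:
            heapq.heappush(pq, (cost + (1 - w) * t + w * 10 * r, nxt, path + [nxt]))
    return None

print(best_route("A", "D", w=0.0))  # fastest route: A-B-D (20 min, high risk)
print(best_route("A", "D", w=1.0))  # safest route:  A-C-D (27 min, low risk)
```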

  1. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. 
At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
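
The local one-at-a-time sensitivity analysis used to screen parameters can be sketched as follows. The model function, parameter names, and values below are invented stand-ins, not the study's coupled hydrological-geomechanical model:

```python
def factor_of_safety(params):
    # toy stand-in for the slope stability output of interest
    c, tan_phi, k = params["cohesion"], params["tan_phi"], params["conductivity"]
    return (c + 50.0 * tan_phi) / (30.0 + 5.0 * k)

reference = {"cohesion": 5.0, "tan_phi": 0.6, "conductivity": 2.0}
base = factor_of_safety(reference)

# One-at-a-time screen: perturb each parameter by +/-10% around the
# reference point while holding the others fixed, and record the change
# in the model output.
for name in reference:
    for factor in (0.9, 1.1):
        perturbed = dict(reference)
        perturbed[name] *= factor
        delta = factor_of_safety(perturbed) - base
        print(f"{name:12s} x{factor:.1f}: delta FS = {delta:+.4f}")
```

Parameters whose perturbation produces the largest output change are the ones carried forward into systematic sampling and calibration.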

  2. Bayesian Model Selection under Time Constraints

    NASA Astrophysics Data System (ADS)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
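
The bootstrapped error estimate of a sampling-based evidence value can be sketched as follows. The likelihood draws and the per-model sample budgets below are invented for illustration; the point is only that the slower model's Monte Carlo BME estimate carries a much larger sampling error under the same time budget:

```python
import math
import random

random.seed(2)

def bootstrap_se(values, n_boot=500):
    # Bootstrap standard error of the mean: a cheap estimate of the
    # statistical sampling error in a Monte Carlo BME estimate
    # (BME approximated as the mean likelihood over prior samples).
    n = len(values)
    means = []
    for _ in range(n_boot):
        resample = [values[random.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    m = sum(means) / n_boot
    return math.sqrt(sum((x - m) ** 2 for x in means) / (n_boot - 1))

# Hypothetical likelihood draws: the fast model affords 2000 prior samples
# within the budget, the slow model only 20.
fast = [math.exp(-0.5 * random.gauss(0, 1) ** 2) for _ in range(2000)]
slow = [math.exp(-0.5 * random.gauss(0, 1) ** 2) for _ in range(20)]

print(bootstrap_se(fast), bootstrap_se(slow))
```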

  3. Reachability analysis of real-time systems using time Petri nets.

    PubMed

    Wang, J; Deng, Y; Xu, G

    2000-01-01

    Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.

  4. Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI

    PubMed Central

    Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank

    2013-01-01

    Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback so far was limited to mainly training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e. the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation in order to more directly target functionally relevant brain networks. PMID:23668967

  5. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.

  6. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
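
The decoupled post-processing idea, estimating a thermal-NO formation rate from the rate-limiting reaction O + N2 → NO + N and integrating it over the nodal residence time, can be sketched as follows. The rate constant uses a commonly cited Zeldovich fit; the concentrations and the equilibrium cap are illustrative, not values from the paper:

```python
import math

def k1(temperature_k):
    # Forward rate constant for O + N2 -> NO + N, Arrhenius form
    # (cm^3/mol/s); coefficients follow a commonly cited Zeldovich fit
    # (activation energy in cal/mol, R in cal/mol/K).
    return 1.8e14 * math.exp(-76100.0 / (1.987 * temperature_k))

def thermal_no(temperature_k, conc_o, conc_n2, residence_time, no_eq):
    # NO formed in one computational cell: rate-limited production
    # d[NO]/dt = 2 * k1 * [O] * [N2], integrated over the nodal residence
    # time and capped at the equilibrium level no_eq.
    rate = 2.0 * k1(temperature_k) * conc_o * conc_n2
    return min(rate * residence_time, no_eq)
```

Summing `thermal_no` over all nodes of the converged CFD field mirrors the paper's statement that the total NO(x) level is the sum of the nodal predictions.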

  7. A new physics-based modeling approach for tsunami-ionosphere coupling

    NASA Astrophysics Data System (ADS)

    Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Yang, Y.-M.; Deng, Y.; Mannucci, A. J.

    2015-06-01

Tsunamis can generate gravity waves propagating upward through the atmosphere, inducing total electron content (TEC) disturbances in the ionosphere. To capture this process, we have implemented tsunami-generated gravity waves into the Global Ionosphere-Thermosphere Model (GITM) to construct a three-dimensional physics-based model WP (Wave Perturbation)-GITM. WP-GITM takes tsunami wave properties, including the wave height, wave period, wavelength, and propagation direction, as inputs and time-dependently characterizes the responses of the upper atmosphere between 100 km and 600 km altitudes. We apply WP-GITM to simulate the ionosphere above the West Coast of the United States around the time when the tsunami associated with the March 2011 Tohoku-Oki earthquake arrived. The simulated TEC perturbations agree with Global Positioning System observations reasonably well. For the first time, a fully self-consistent and physics-based model has reproduced the GPS-observed traveling ionospheric signatures of an actual tsunami event.

  8. Real-time flood forecasting

    USGS Publications Warehouse

    Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.

    2009-01-01

    Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model, with the aim of predicting flood conditions in the Tamsui River Basin. The model follows a deterministic approach, using mathematical modeling of complex phenomena with specific parameter values to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of state of the river, and this is further related to another typhoon-rainfall model. Geographic information system (GIS) data, based on a precise contour model of the terrain, are used to estimate the regions that are perilous to flooding. In response to the project's progress, the HIRDT also applied a deterministic model of unsteady flow to help river authorities issue timely warnings and take other emergency measures.

  9. Linear parameter varying identification of ankle joint intrinsic stiffness during imposed walking movements.

    PubMed

    Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E

    2013-01-01

    This paper describes a novel model structure and identification method for the time-varying, intrinsic stiffness of the human ankle joint during imposed walking (IW) movements. The model structure is based on the superposition of a large-signal, linear, time-invariant (LTI) model and a small-signal, linear parameter varying (LPV) model. The methodology is based on a two-step algorithm: the LTI model is first estimated using data from an unperturbed IW trial; then, the LPV model is identified using data from a perturbed IW trial with the output predictions of the LTI model removed from the measured torque. Experimental results demonstrate that the method accurately tracks the continuous-time variation of normal ankle intrinsic stiffness when the joint position changes during the IW movement. Intrinsic stiffness gain decreases from full plantarflexion to near the mid-point of plantarflexion and then increases substantially as the ankle is dorsiflexed.
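    The two-step idea (an LTI fit on the unperturbed trial, then an LPV fit on the residual torque) can be sketched with ordinary least squares. The polynomial trend and the position-scheduled gain below are simplifying assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

def two_step_lpv(pos_u, y_u, pos_p, u_p, y_p, order=2):
    """Two-step sketch: (1) fit a large-signal trend of torque vs. position
    on the unperturbed trial; (2) on the perturbed trial, remove that trend
    and fit a position-scheduled small-signal stiffness gain
    k(q) = sum_j g_j * q**j multiplying the perturbation input u_p."""
    c = np.polyfit(pos_u, y_u, order)            # step 1: LTI / trend part
    resid = y_p - np.polyval(c, pos_p)           # step 2: small-signal torque
    basis = np.column_stack([u_p * pos_p ** j for j in range(order + 1)])
    g = np.linalg.lstsq(basis, resid, rcond=None)[0]
    stiffness = lambda q: sum(gj * q ** j for j, gj in enumerate(g))
    return c, stiffness
```

    With noise-free synthetic data the scheduled gain is recovered exactly; real identification would use dynamic models in place of the static polynomial trend.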

  10. Impedance-based overcharging and gassing model for VRLA/AGM batteries

    NASA Astrophysics Data System (ADS)

    Thele, M.; Karden, E.; Surewaard, E.; Sauer, D. U.

    This paper presents for the first time an impedance-based non-linear model for lead-acid batteries that is applicable in all operational modes. An overcharging model describes the accumulation and depletion of the dissolved Pb²⁺ ions. This physical model has been added to the previously presented model to expand the model's validity. To properly represent the charge acceptance during dynamic operation, a concept of "hardening crystals" has been introduced in the model. Moreover, a detailed gassing and oxygen-recombination model has been integrated. A realistic simulation of the overcharging behavior is now possible. The mathematical description is given in the paper. Simplifications are introduced that allow for an efficient implementation and for model parameterization in the time domain. A comparison between experimental data and simulation results demonstrates the achieved accuracy. The model enhancement is of major importance for analyzing charging strategies, especially in partial-cycling operation with limited charging time, e.g. in electrically assisted or hybrid cars and autonomous power supply systems.

  11. Assessing spatiotemporal changes in forest carbon turnover times in observational data and models

    NASA Astrophysics Data System (ADS)

    Yu, K.; Smith, W. K.; Trugman, A. T.; van Mantgem, P.; Peng, C.; Condit, R.; Anderegg, W.

    2017-12-01

    Forests influence global carbon and water cycles, biophysical land-atmosphere feedbacks, and atmospheric composition. The capacity of forests to sequester atmospheric CO2 in a changing climate depends not only on the response of carbon uptake (i.e., gross primary productivity) but also on the simultaneous change in carbon residence time. However, changes in carbon residence time with climate change are uncertain, impacting the accuracy of predictions of future terrestrial carbon cycle dynamics. Here, we use long-term forest inventory data representative of tropical, temperate, and boreal forests; satellite-based estimates of net primary productivity and vegetation carbon stock; and six models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to investigate spatiotemporal trends in carbon residence time and its relation to climate. Forest inventory and satellite-based estimates of carbon residence time show a pervasive decreasing trend across global forests. In contrast, the CMIP5 models diverge in predicting historical and future trends in carbon residence time. This divergence across CMIP5 models indicates that carbon turnover times are not well constrained by observations, which likely contributes to large variability in future carbon cycle projections.
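    The residence-time estimate underlying such satellite-based comparisons is, to first order, the ratio of vegetation carbon stock to productivity. A minimal sketch with illustrative units and numbers:

```python
def turnover_time(veg_carbon_stock, npp):
    """First-order carbon residence (turnover) time in years:
    tau = vegetation carbon stock / net primary productivity.
    Illustrative units: kgC m^-2 for stock, kgC m^-2 yr^-1 for NPP."""
    return veg_carbon_stock / npp
```

    For example, a forest holding 15 kgC/m² of vegetation carbon with an NPP of 0.5 kgC/m²/yr has a 30-year turnover time; a decreasing trend in tau means carbon cycles through the stock faster relative to productivity.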

  12. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey-system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predictive accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we present a syphilis trend forecasting method built on information reuse and the grey-system GM(1,1) model.
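    The conventional GM(1,1) procedure that the article builds on (accumulated generating operation, least-squares fit of the grey differential equation, inverse AGO) can be sketched as follows. This is the textbook algorithm, not the article's information-reuse refinement:

```python
import numpy as np

def gm11(x0, horizon=1):
    """Classic grey GM(1,1): AGO, least-squares fit of dx1/dt + a*x1 = b
    using the mean background sequence, then inverse AGO to recover
    fitted and predicted values of the raw series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    return np.diff(x1_hat, prepend=0.0)       # inverse AGO: fitted x0 series
```

    On a short, roughly exponential series (the setting GM(1,1) targets) the fitted values track the data closely while requiring only one least-squares solve.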

  13. Inference of Gene Regulatory Networks Incorporating Multi-Source Biological Knowledge via a State Space Model with L1 Regularization

    PubMed Central

    Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya

    2014-01-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. 
As an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information. PMID:25162401
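    The core of the statistical side, an L1-regularized vector auto-regressive fit whose zero coefficients are read as absent regulations, can be sketched with a simple proximal-gradient (ISTA) solver. This is a generic VAR(1)+lasso illustration, not the paper's state-space estimator:

```python
import numpy as np

def var1_lasso(X, lam=0.1, steps=2000, lr=None):
    """Sketch of an L1-regularized VAR(1) fit: estimate A minimizing
    ||X[1:] - X[:-1] @ A.T||_F^2 + lam * ||A||_1 via proximal gradient
    (ISTA). Zeros in A are read as 'no regulation' from gene j to gene i."""
    Y, Z = X[1:], X[:-1]
    n = X.shape[1]
    if lr is None:
        lr = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)  # 1 / sigma_max(Z)^2
    A = np.zeros((n, n))
    for _ in range(steps):
        G = (A @ Z.T - Y.T) @ Z                  # gradient of squared loss
        A = A - lr * G
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # soft-threshold
    return A
```

    On data simulated from a sparse transition matrix, the estimate keeps the true self-regulation terms while shrinking spurious cross-links toward zero.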

  14. Research of Manufacture Time Management System Based on PLM

    NASA Astrophysics Data System (ADS)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

    Targeting the machine shops of manufacturing enterprises, this system analyzes their business needs and builds a plant-level management information system for manufacture time and manufacture-time information in the manufacturing process. Combining WEB technology with an EXCEL VBA development approach, it constructs a hybrid, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.

  15. Model-based prediction of nephropathia epidemica outbreaks based on climatological and vegetation data and bank vole population dynamics.

    PubMed

    Haredasht, S Amirpour; Taylor, C J; Maes, P; Verstraeten, W W; Clement, J; Barrios, M; Lagrou, K; Van Ranst, M; Coppin, P; Berckmans, D; Aerts, J-M

    2013-11-01

    Wildlife-originated zoonotic diseases in general are a major contributor to emerging infectious diseases. Hantaviruses more specifically cause thousands of human disease cases annually worldwide, while understanding and predicting human hantavirus epidemics pose numerous unsolved challenges. Nephropathia epidemica (NE) is a human infection caused by Puumala virus, which is naturally carried and shed by bank voles (Myodes glareolus). The objective of this study was to develop a method that allows model-based prediction of the occurrence of NE epidemics 3 months ahead. Two data sets, concerning NE cases in Finland and Belgium, were utilized to develop and test the models. In this study, we selected the most relevant inputs from all the available data for use in a dynamic linear regression (DLR) model. The number of NE cases in Finland was modelled using data from 1996 to 2008. The NE cases were predicted based on the time series data of average monthly air temperature (°C) and the bank voles' trapping index using a DLR model. The bank voles' trapping index data were interpolated using a related dynamic harmonic regression (DHR) model. Here, the DLR and DHR models used time-varying parameters. Both the DHR and DLR models were based on a unified state-space estimation framework. For the Belgian case, no time series of the bank voles' population dynamics were available. Several studies, however, have suggested that the population of bank voles is related to the variation in seed production of beech and oak trees in Northern Europe. Therefore, the NE occurrence pattern in Belgium was predicted based on a DLR model by using remotely sensed phenology parameters of broad-leaved forests, together with the oak and beech seed categories and average monthly air temperature (°C), using data from 2001 to 2009.
Our results suggest that even without any knowledge about hantavirus dynamics in the host population, the time variation in NE outbreaks in Finland could be predicted 3 months ahead with a 34% mean relative prediction error (MRPE). This took into account solely the population dynamics of the carrier species (bank voles). The time series analysis also revealed that climate change, as represented by the vegetation index, changes in forest phenology derived from satellite images and directly measured air temperature, may affect the mechanics of NE transmission. NE outbreaks in Belgium were predicted 3 months ahead with a 40% MRPE, based only on the climatological and vegetation data, in this case, without any knowledge of the bank vole's population dynamics. In this research, we demonstrated that NE outbreaks can be predicted using climate and vegetation data or the bank vole's population dynamics, by using dynamic data-based models with time-varying parameters. Such a predictive modelling approach might be used as a step towards the development of new tools for the prevention of future NE outbreaks. © 2012 Blackwell Verlag GmbH.

  16. Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turinsky, Paul

    This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged to always meet or exceed the desired fidelity, and if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, one could utilize the lowest-fidelity model that still provides the desired accuracy in the QoI. Assuming lower-fidelity models require less computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model determined using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel-pin energy conservation equations.
    The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial meshing, corrected somewhat by employing effective fuel heat conduction values. The effectiveness of switching between the highest fidelity model and lower fidelity model as a function of time was assessed using the neutronics problem. Based upon work completed to date, one concludes that the time switching is effective in annealing out differences between the highest and lower fidelity solutions. The effectiveness of using a lower fidelity GPT solution, along with a prolongation operator, to estimate the QoI was also assessed. The utilization of a lower fidelity GPT solution was done in an attempt to avoid the high computational burden associated with solving for the highest fidelity GPT solution. Based upon work completed to date, one concludes that the lower fidelity adjoint solution is not sufficiently accurate with regard to estimating the QoI; however, a formulation has been revealed that may provide a path for addressing this shortcoming.

  17. Scalable and responsive event processing in the cloud

    PubMed Central

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-01

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
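    For a single-class M/G/1 approximation of one query-processing unit, the mean response time follows the Pollaczek-Khinchine formula, which is enough to sketch how a response-time target maps to a node count. The paper's multiple-class M/G/1 model is more detailed; this is a simplified, single-class illustration:

```python
def mg1_response_time(lam, es, es2):
    """Pollaczek-Khinchine mean response time for an M/G/1 queue:
    E[T] = E[S] + lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S]."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable: utilization >= 1")
    return es + lam * es2 / (2.0 * (1.0 - rho))

def nodes_needed(lam_total, es, es2, target):
    """Smallest number of identical nodes (arrivals split evenly) whose
    per-node M/G/1 mean response time meets the target -- the kind of
    prediction used to size the cloud resource pool."""
    n = 1
    while True:
        try:
            if mg1_response_time(lam_total / n, es, es2) <= target:
                return n
        except ValueError:
            pass
        n += 1
```

    With exponential service (E[S] = 1, E[S²] = 2) and lam = 0.5 this reduces to the familiar M/M/1 mean response time of 2.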

  18. Average inactivity time model, associated orderings and reliability properties

    NASA Astrophysics Data System (ADS)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  19. Semi-active control of magnetorheological elastomer base isolation system utilising learning-based inverse model

    NASA Astrophysics Data System (ADS)

    Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng

    2017-10-01

    Magnetorheological elastomer (MRE) base isolation has attracted considerable attention over the last two decades thanks to its self-adaptability and high-authority controllability in the semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a reasonably tractable mathematical model to describe the inverse dynamics of MRE base isolators and hence to realise control synthesis of the MRE base isolation system. Two aims have been achieved in this paper: i) development of an inverse model for the MRE base isolator based on an optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising an LQR controller and the GRNN inverse model. The superiority of the GRNN inverse model lies in its requirement of fewer input variables, faster training process and prompt calculation response, which makes it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model, and the control performance of the MRE base isolation system is compared with the bare building, a passive-on isolation system and a passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce the desired control force accurately and that the MRE base isolation system can effectively suppress the structural responses when compared to the passive isolation systems.
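    A GRNN is essentially Nadaraya-Watson kernel regression: "training" is just storing samples and only the spread sigma needs tuning, which is why online training and prompt evaluation are cheap. A generic sketch (for the inverse model, X would hold the desired force and measured states and y the coil current; the names here are illustrative, not the paper's):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.1):
    """General regression neural network output for one query point x:
    a Gaussian-kernel weighted average of the stored training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances to samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # pattern-layer activations
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)  # summation/output layer
```

    Because the prediction is a weighted average of stored targets, adding a new sample online is a constant-time append rather than a retraining pass.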

  20. Modeling the Quiet Time Outflow Solution in the Polar Cap

    NASA Technical Reports Server (NTRS)

    Glocer, Alex

    2011-01-01

    We use the Polar Wind Outflow Model (PWOM) to study geomagnetically quiet conditions in the polar cap during solar maximum. The PWOM solves the gyrotropic transport equations for O(+), H(+), and He(+) along several magnetic field lines in the polar region in order to reconstruct the full 3D solution. We directly compare our simulation results to the data-based empirical model of electron density of Kitamura et al. [2011], which is based on 63 months of Akebono satellite observations. The modeled ion and electron temperatures are also compared with a statistical compilation of quiet time data obtained by the EISCAT Svalbard Radar (ESR) and Intercosmos satellites (Kitamura et al. [2011]). The data and model agree reasonably well. This study shows that photoelectrons play an important role in explaining the differences between sunlit and dark results, the ion composition, and the ion and electron temperatures of the quiet time polar wind solution. Moreover, these results provide validation of the PWOM's ability to model the quiet time ("background") solution.

  1. The Fixed-Links Model in Combination with the Polynomial Function as a Tool for Investigating Choice Reaction Time Data

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2006-01-01

    A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…

  2. Fully-automated identification of fish species based on otolith contour: using short-time Fourier transform and discriminant analysis (STFT-DA).

    PubMed

    Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching

    2016-01-01

    Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully-automated species identification model with an accuracy higher than 80%. The purpose of the current study is to develop a fully-automated model, based on the otolith contours, to identify fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. Short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA), as a classification technique, was used to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% for all cases. In addition, effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. Short-time Fourier transform could determine important features of the otolith outlines. The fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model codes can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be used for more species and families in future studies.

  3. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model, based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and a model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
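    The covariance structure described above can be sketched directly: a sum of Kronecker products for the richer model, and the factorized inverse that makes the single-product special case cheap. The matrix sizes and values below are illustrative:

```python
import numpy as np

def kron_cov(spatial_covs, temporal_covs):
    """Spatiotemporal covariance as a sum of Kronecker products:
    C = sum_i kron(Cs_i, Ct_i), one term per spatial component with its
    own temporal covariance."""
    return sum(np.kron(Cs, Ct) for Cs, Ct in zip(spatial_covs, temporal_covs))

def kron_inv(Cs, Ct):
    """For a single Kronecker term the inverse factorizes:
    inv(kron(Cs, Ct)) = kron(inv(Cs), inv(Ct)); inverting the small factors
    is what keeps single-product models cheap to invert."""
    return np.kron(np.linalg.inv(Cs), np.linalg.inv(Ct))
```

    A sum of Kronecker products does not factorize this way in general, which is why a sum-based model must be designed specially to keep its inverse manageable.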

  4. Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?

    NASA Astrophysics Data System (ADS)

    Engdahl, N.

    2017-12-01

    Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.

  5. Quality of asthma care under different primary care models in Canada: a population-based study.

    PubMed

    To, Teresa; Guan, Jun; Zhu, Jingqin; Lougheed, M Diane; Kaplan, Alan; Tamari, Itamar; Stanbrook, Matthew B; Simatovic, Jacqueline; Feldman, Laura; Gershon, Andrea S

    2015-02-14

    Previous research has shown variations in quality of care and patient outcomes under different primary care models. The objective of this study was to use previously validated, evidence-based performance indicators to measure quality of asthma care over time and to compare quality of care between different primary care models. Data were obtained for years 2006 to 2010 from the Ontario Asthma Surveillance Information System, which uses health administrative databases to track individuals with asthma living in the province of Ontario, Canada. Individuals with asthma (n=1,813,922) were divided into groups based on the practice model of their primary care provider (i.e., fee-for-service, blended fee-for-service, blended capitation). Quality of asthma care was measured using six validated, evidence-based asthma care performance indicators. All of the asthma performance indicators improved over time within each of the primary care models. Compared to the traditional fee-for-service model, the blended fee-for-service and blended capitation models had higher use of spirometry for asthma diagnosis and monitoring, higher rates of inhaled corticosteroid prescription, and lower outpatient claims. Emergency department visits were lowest in the blended fee-for-service group. Quality of asthma care improved over time within each of the primary care models. However, the amount by which they improved differed between the models. The newer primary care models (i.e., blended fee-for-service, blended capitation) appear to provide better quality of asthma care compared to the traditional fee-for-service model.

  6. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    NASA Astrophysics Data System (ADS)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

    The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or governments to best locate its resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models were presented for location of urban facilities while considering geographical obstacles at the same time. In these models, three new distance functions were proposed. The first function was based on the analysis of shortest path in linear network, which was called SPD function. The other two functions, namely PD and P2D, were based on the algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the visual basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distance from demand points to their corresponding facilities. Because of the distance between the demand points and facilities becoming more realistic in the proposed functions, results indicated desired quality of the proposed models in terms of quality of allocating points to centers and logistic cost. 
The obtained results show promising improvements in the allocation, the logistics costs and the response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of facility location and demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistics costs, facility location and response time, the P2D-based model was the appropriate choice for the urban facility location problem in the presence of geographical obstacles.

  7. Capability of a Mobile Monitoring System to Provide Real-Time Data Broadcasting and Near Real-Time Source Attribution

    NASA Astrophysics Data System (ADS)

    Erickson, M.; Olaguer, J.; Wijesinghe, A.; Colvin, J.; Neish, B.; Williams, J.

    2014-12-01

    It is becoming increasingly important to understand the emissions and health effects of industrial facilities. Many areas have no or limited sustained monitoring capabilities, making it difficult to quantify the major pollution sources affecting human health, especially in fence line communities. Developments in real-time monitoring and micro-scale modeling offer unique ways to tackle these complex issues. This presentation will demonstrate the capability of coupling real-time observations with micro-scale modeling to provide real-time information and near real-time source attribution. The Houston Advanced Research Center constructed the Mobile Acquisition of Real-time Concentrations (MARC) laboratory. MARC consists of a Ford E-350 passenger van outfitted with a Proton Transfer Reaction Mass Spectrometer (PTR-MS) and meteorological equipment. This allows for the fast measurement of various VOCs important to air quality. The data recorded from the van is uploaded to an off-site database and the information is broadcast to a website in real-time. This provides for off-site monitoring of MARC's observations, which allows off-site personnel to provide immediate input to the MARC operators on how to best achieve project objectives. The information stored in the database can also be used to provide near real-time source attribution. An inverse model has been used to ascertain the amount, location, and timing of emissions based on MARC measurements in the vicinity of industrial sites. The inverse model is based on a 3D micro-scale Eulerian forward and adjoint air quality model known as the HARC model. The HARC model uses output from the Quick Urban and Industrial Complex (QUIC) wind model and requires a 3D digital model of the monitored facility based on lidar or industrial permit data. MARC is one of the instrument platforms deployed during the 2014 Benzene and other Toxics Exposure Study (BEE-TEX) in Houston, TX. 
The main goal of the study is to quantify and explain the origin of ambient exposure to hazardous air pollutants in an industrial fence line community near the Houston Ship Channel. Preliminary results derived from analysis of MARC observations during the BEE-TEX experiment will be presented.

  8. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively little time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demands. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both the time for mesh generation and the time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect the accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  9. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  10. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  11. Pan-European stochastic flood event set

    NASA Astrophysics Data System (ADS)

    Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav

    2017-04-01

Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on a probabilistic basis for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD 3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with the Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with the University of East Anglia (UEA) in terms of the rainfall-runoff modelling. KIT's main objective is to provide high resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields.
These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in the magnitudes, changes in the characteristics of floods such as time base and time to peak, and spatial correlations of large flood events. The Pan-European flood stochastic event set will permit a better view of flood risk for market applications.
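The abstract names bias correction of CCLM precipitation without specifying the method. One widely used option is empirical quantile mapping, sketched below on purely synthetic data; the function name and distributions are illustrative assumptions, not KIT's actual procedure.

```python
import numpy as np

def quantile_map(model_clim, obs_clim, simulated):
    """Empirical quantile mapping: replace each simulated value by the
    observed-climatology value at the same empirical quantile."""
    q = np.searchsorted(np.sort(model_clim), simulated) / len(model_clim)
    return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, size=5000)    # "observed" daily precipitation (synthetic)
model = 0.5 * obs + 1.0                 # systematically biased model output
corrected = quantile_map(model, obs, model)
```

After mapping, the corrected series reproduces the observed distribution even though the raw model output was biased in both mean and spread.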

  12. Adaptive MPC based on MIMO ARX-Laguerre model.

    PubMed

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is obtained by projecting the ARX model onto Laguerre bases. The resulting model, termed MIMO ARX-Laguerre, is characterized by a simple recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned at each iteration by an online identification algorithm that updates both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize adaptive predictive controllers for the CSTR process benchmark and to test them in numerical simulations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
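The Laguerre projection underlying such models can be sketched with the standard discrete Laguerre network: a first-order section sqrt(1-a²)/(1-az⁻¹) followed by all-pass sections (z⁻¹-a)/(1-az⁻¹). This is an illustrative filter bank, not the authors' implementation.

```python
import numpy as np

def laguerre_filter_bank(u, a, n_filters):
    """Outputs of a discrete Laguerre filter bank with pole a (|a| < 1).
    First section: sqrt(1 - a^2) / (1 - a z^-1); each later section
    multiplies by the all-pass (z^-1 - a) / (1 - a z^-1)."""
    T = len(u)
    gain = np.sqrt(1.0 - a * a)
    L = np.zeros((n_filters, T))
    for t in range(T):
        prev = L[:, t - 1] if t > 0 else np.zeros(n_filters)
        L[0, t] = a * prev[0] + gain * u[t]
        for k in range(1, n_filters):
            x_prev = L[k - 1, t - 1] if t > 0 else 0.0
            # all-pass section: y[t] = a*y[t-1] + x[t-1] - a*x[t]
            L[k, t] = a * prev[k] + x_prev - a * L[k - 1, t]
    return L

# Impulse response check: the first filter's response is gain * a^t
u = np.zeros(16)
u[0] = 1.0
L = laguerre_filter_bank(u, a=0.5, n_filters=3)
```

An ARX-Laguerre model then regresses the plant output on these filter states, so the weights (and, adaptively, the pole a) can be identified online, e.g. by recursive least squares.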

  13. Time dependent data, time independent models: challenges of updating Australia's National Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.

    2017-12-01

    Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. 
Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
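The practical stakes of time-(in)dependence follow from the Poisson occupancy formula P(N ≥ 1) = 1 − exp(−∫λ(t)dt): the same long-term rate gives very different 50-year hazard depending on the current state of the cluster cycle. A toy comparison with purely illustrative rates:

```python
import math

def prob_at_least_one(rate_integral):
    """P(N >= 1) for a Poisson process with the given integrated rate."""
    return 1.0 - math.exp(-rate_integral)

# Time-independent view: a constant 1-in-1000-yr rate over a 50-yr window
p_constant = prob_at_least_one(50.0 / 1000.0)

# Clustered view: same long-term rate, but the fault is currently in a
# quiescent period with a 10x lower instantaneous rate
p_quiescent = prob_at_least_one(50.0 / 10000.0)
```

Under clustering, design-level probabilities computed from the long-term average can overstate (during quiescence) or understate (during active periods) the hazard for the next 50 years.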

  14. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
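As a concrete instance of the quality control charts mentioned, a one-sided upper CUSUM over standardized scale-score residuals can be sketched as follows; the reference value k and decision interval h are illustrative design parameters, not values from the paper.

```python
def cusum_alarm(series, target, k=0.5, h=4.0):
    """One-sided upper CUSUM: return the first index at which the cumulative
    deviation above (target + k) exceeds the decision interval h, else None."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return None

# Standardized score residuals: in-control, then an abrupt upward shift
residuals = [0.1, -0.2, 0.0, 0.3, -0.1, 2.5, 2.8, 2.6, 2.9, 2.7]
alarm_at = cusum_alarm(residuals, target=0.0)
```

The chart accumulates only deviations beyond the allowance k, so it ignores customary variation but flags an abrupt shift within a few administrations.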

  15. A real-time spiking cerebellum model for learning robot control.

    PubMed

    Carrillo, Richard R; Ros, Eduardo; Boucheny, Christian; Coenen, Olivier J-M D

    2008-01-01

    We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, therefore abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
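A generic pair-based STDP kernel of the kind such spiking models build on is sketched below. Note that the paper's parallel fiber to Purkinje cell plasticity is driven by IO error activity, so this simple exponential pairing rule and its parameter values are only an illustrative stand-in.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Pairs with small |dt| dominate learning, while spikes separated by much more than tau contribute almost nothing.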

  16. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve temporal-frequency property of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of far-field intensity distribution. The presented model-based method is capable of completing a mode aberration effective correction just applying one disturbing onto the deformable mirror(one correction by one disturbing), which is reconstructed by the singular value decomposing the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under the various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times than that of the previous method with one aberration correction after applying N times disturbing onto the deformable mirror (one correction by N disturbing).

  17. Analysis and stochastic modelling of Intensity-Duration-Frequency relationship from 88 years of 10 min rainfall data in North Spain

    NASA Astrophysics Data System (ADS)

    Delgado, Oihane; Campo-Bescós, Miguel A.; López, J. Javier

    2017-04-01

When solving certain hydrological engineering problems, it is often necessary to know rain intensity values associated with a specific probability or return period, T. Based on analyses of extreme rainfall events at different time scales of aggregation, we can deduce the Intensity-Duration-Frequency (IDF) relationships that are widely used in hydraulic infrastructure design. However, the lack of long time series of rainfall intensities at short time scales, minutes or hours, leads to the use of mathematical expressions to characterize and extend these curves. One way to deduce them is through the development of synthetic rainfall time series generated from stochastic models, which is evaluated in this work. From the accumulated rainfall time series recorded every 10 min at the Igueldo pluviograph (San Sebastian, Spain) for the period 1927-2005, homogeneity has been checked and possible statistically significant increasing or decreasing trends have been examined. Subsequently, two models have been calibrated: the Bartlett-Lewis model and a Markov chain model, both based on successions of storms, each composed of a series of rainfall events separated by short time intervals. Finally, synthetic ten-minute rainfall time series are generated, which allow detailed IDF curves to be estimated and compared with the IDF curves estimated from the recorded data.
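The occurrence part of a Markov chain rainfall model can be sketched with a two-state (dry/wet) chain; the transition probabilities below are illustrative, not calibrated Igueldo values.

```python
import numpy as np

def simulate_occurrence(p_wd, p_ww, n_steps, rng):
    """Two-state (dry=0 / wet=1) Markov chain for rainfall occurrence.
    p_wd = P(wet | previous step dry); p_ww = P(wet | previous step wet)."""
    state = 0
    out = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        p = p_ww if state == 1 else p_wd
        state = int(rng.random() < p)
        out[t] = state
    return out

rng = np.random.default_rng(42)
occ = simulate_occurrence(p_wd=0.1, p_ww=0.6, n_steps=100_000, rng=rng)
# Stationary wet fraction is p_wd / (p_wd + 1 - p_ww) = 0.2 here
wet_frac = occ.mean()
```

A full generator then attaches rainfall depths to the wet steps; long synthetic runs of such series are what allow IDF quantiles to be estimated at durations the short observed record cannot support.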

  18. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. The adaptive LMS model-based method, however, indicated a greater spread of lesser damage over time and story when the baseline model was not well defined. Finally, the two algorithms are tested on a simpler steel structure with typical hysteretic behaviour to quantify the impact of mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.
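The adaptive LMS idea can be sketched in miniature: filter weights are updated from the prediction error, and drift in the identified weights signals a change in how the structure transfers load. The two-tap system below is a toy, not the paper's structural model.

```python
import numpy as np

def lms_identify(x, d, n_taps=2, mu=0.01):
    """LMS: adapt FIR weights w so that w @ [x[t-1], ..., x[t-n_taps]]
    tracks the measured response d[t]."""
    w = np.zeros(n_taps)
    for t in range(n_taps, len(x)):
        u = x[t - n_taps : t][::-1]   # most recent input first
        e = d[t] - w @ u              # prediction error
        w += mu * e * u               # stochastic-gradient update
    return w

rng = np.random.default_rng(3)
x = rng.standard_normal(20_000)           # excitation
d = np.zeros_like(x)
d[2:] = 0.5 * x[1:-1] + 0.3 * x[:-2]      # "structure": d[t] = 0.5 x[t-1] + 0.3 x[t-2]
w = lms_identify(x, d)                    # converges toward [0.5, 0.3]
```

If the underlying coefficients change mid-record (e.g. stiffness loss after an event), the same update tracks the new values, which is the mechanism the model-based SHM approach exploits.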

  19. An updated concept of coagulation with clinical implications.

    PubMed

    Romney, Gregory; Glick, Michael

    2009-05-01

    Over the past century, a series of models have been put forth to explain the coagulation mechanism. The coagulation cascade/waterfall model has gained the most widespread acceptance. This model, however, has problems when it is used in different clinical scenarios. A more recently proposed cell-based model better describes the coagulation process in vivo and provides oral health care professionals (OHCPs) with a better understanding of the clinical implications of providing dental care to patients with potentially increased bleeding tendencies. The authors conducted a literature search using the PubMed database. They searched for key words including "coagulation," "hemostasis," "bleeding," "coagulation factors," "models," "prothrombin time," "activated partial thromboplastin time," "international normalized ratio," "anticoagulation therapy" and "hemophilia" separately and in combination. The coagulation cascade/waterfall model is insufficient to explain coagulation in vivo, predict a patient's bleeding tendency, or correlate clinical outcomes with specific laboratory screening tests such as prothrombin time, activated partial thromboplastin time and international normalized ratio. However, the cell-based model of coagulation that reflects the in vivo process of coagulation provides insight into the clinical ramifications of treating dental patients with specific coagulation factor deficiencies. Understanding the in vivo coagulation process will help OHCPs better predict a patient's bleeding tendency. In addition, applying the theoretical concept of the cell-based model of coagulation to commonly used laboratory screening tests for coagulation and bleeding will result in safer and more appropriate dental care.

  20. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

Uncertainties in field parameters, noise in the observed data and unknown boundary conditions are the main factors involved in groundwater level (GL) time series that limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff, considering various input scenarios to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where no observations are available. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both calibration and validation steps.
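Threshold-based wavelet de-noising can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients; the abstract does not state the wavelet or threshold rule used, so every choice below is an illustrative assumption.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet de-noising with soft thresholding of the
    detail coefficients (len(x) must be even)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2.0)
    y[1::2] = (approx - detail) / np.sqrt(2.0)
    return y

rng = np.random.default_rng(1)
clean = np.repeat(np.sin(np.linspace(0.0, 3.0, 256)), 2)   # slowly varying signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = haar_denoise(noisy, thresh=0.2)
```

Because a slowly varying groundwater signal concentrates in the approximation coefficients while noise splits evenly, shrinking the details reduces the error relative to the noisy input, which is the mechanism behind the reported performance gain.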

  1. A Point Rainfall Generator With Internal Storm Structure

    NASA Astrophysics Data System (ADS)

    Marien, J. L.; Vandewiele, G. L.

    1986-04-01

    A point rainfall generator is a probabilistic model for the time series of rainfall as observed in one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The present generator attempts to model all features of the rainfall time series which are important for flood studies as accurately as possible. The original aspects of the model are on the one hand the way in which storms are defined and on the other hand the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure. This fact justifies better the extrapolation of this model to severe storms for which the data are very sparse. This can be important when using the model to simulate severe flood events.

  2. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    NASA Astrophysics Data System (ADS)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of kinetic parameter and analysis measure (bias and noise) for which parametric maps were created, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8.
2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
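The core of the spline-residue model, a TAC written as a weighted sum of convolutions of the arterial input function with temporal basis functions, can be sketched discretely as below; for brevity an exponential stands in for the cubic B-spline basis, and all values are illustrative.

```python
import numpy as np

def model_tac(aif, basis, weights, dt):
    """Voxel time-activity curve modelled as a weighted sum of
    convolutions of the arterial input function with basis functions."""
    tac = np.zeros(len(aif))
    for w, b in zip(weights, basis):
        tac += w * np.convolve(aif, b)[: len(aif)] * dt
    return tac

t = np.arange(0.0, 60.0, 1.0)       # frame times (illustrative units)
aif = t * np.exp(-t / 5.0)          # toy arterial input function
b1 = np.exp(-t / 20.0)              # illustrative smooth basis function
tac = model_tac(aif, [b1], [0.8], dt=1.0)
```

Because every basis term is convolved with the AIF, the modelled TAC is forced to rise no faster than the input function, which is the early-time constraint the abstract credits with suppressing noise in early frames.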

  3. The dynamic influence of human resources on evidence-based intervention sustainability and population outcomes: an agent-based modeling approach.

    PubMed

    McKay, Virginia R; Hoffer, Lee D; Combs, Todd B; Margaret Dolcini, M

    2018-06-05

    Sustaining evidence-based interventions (EBIs) is an ongoing challenge for dissemination and implementation science in public health and social services. Characterizing the relationship among human resource capacity within an agency and subsequent population outcomes is an important step to improving our understanding of how EBIs are sustained. Although human resource capacity and population outcomes are theoretically related, examining them over time within real-world experiments is difficult. Simulation approaches, especially agent-based models, offer advantages that complement existing methods. We used an agent-based model to examine the relationships among human resources, EBI delivery, and population outcomes by simulating provision of an EBI through a hypothetical agency and its staff. We used data from existing studies examining a widely implemented HIV prevention intervention to inform simulation design, calibration, and validity. Once we developed a baseline model, we used the model as a simulated laboratory by systematically varying three human resource variables: the number of staff positions, the staff turnover rate, and timing in training. We tracked the subsequent influence on EBI delivery and the level of population risk over time to describe the overall and dynamic relationships among these variables. Higher overall levels of human resource capacity at an agency (more positions) led to more extensive EBI delivery over time and lowered population risk earlier in time. In simulations representing the typical human resource investments, substantial influences on population risk were visible after approximately 2 years and peaked around 4 years. Human resources, especially staff positions, have an important impact on EBI sustainability and ultimately population health. 
A minimum level of human resources based on the context (e.g., size of the initial population and characteristics of the EBI) is likely needed for an EBI to have a meaningful impact on population outcomes. Furthermore, this model demonstrates how ABMs may be leveraged to inform research design and assess the impact of EBI sustainability in practice.
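A toy agent-based sketch of the staffing dynamics described (positions, turnover, training delay) is shown below; the mechanics and parameters are illustrative assumptions, not the authors' calibrated model of the HIV prevention EBI.

```python
import random

def simulate_agency(n_positions, turnover_rate, train_delay, n_months, seed=0):
    """Toy agency: an agent delivers the EBI each month once trained;
    on turnover the position refills and training restarts."""
    rng = random.Random(seed)
    months_to_trained = [train_delay] * n_positions
    delivered = 0
    for _ in range(n_months):
        for i in range(n_positions):
            if months_to_trained[i] <= 0:
                delivered += 1                      # trained agent delivers
            months_to_trained[i] -= 1
            if rng.random() < turnover_rate:
                months_to_trained[i] = train_delay  # new hire must retrain
    return delivered
```

Even this minimal version reproduces the qualitative findings: more positions scale delivery up, while turnover combined with a training delay erodes it, since each replacement spends months not delivering.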

  4. The Design of Model-Based Training Programs

    NASA Technical Reports Server (NTRS)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), its rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilots' entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, produce error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic FMS skills on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  5. Modelling road accidents: An approach using structural time series

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
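The selected specification, a local level with a seasonal component, can be sketched generatively in the standard state-space form; the state equations below are the usual seasonal-dummy formulation and the parameter values are illustrative, not estimates from the road accident data.

```python
import numpy as np

def simulate_llm_seasonal(n, period, sig_level, sig_seas, sig_obs, rng,
                          init_seasonal=None):
    """y_t = mu_t + gamma_t + eps_t: mu_t is a random walk (local level);
    gamma_t is a seasonal dummy whose values sum to ~0 over each period."""
    mu = 0.0
    gammas = list(init_seasonal) if init_seasonal else [0.0] * (period - 1)
    y = np.empty(n)
    for t in range(n):
        gamma = -sum(gammas) + rng.normal(0.0, sig_seas)
        y[t] = mu + gamma + rng.normal(0.0, sig_obs)
        mu += rng.normal(0.0, sig_level)
        gammas = [gamma] + gammas[:-1]       # shift the seasonal state
    return y

rng = np.random.default_rng(7)
y = simulate_llm_seasonal(144, 12, sig_level=0.1, sig_seas=0.05,
                          sig_obs=0.5, rng=rng)
```

In practice the disturbance variances are estimated by maximum likelihood via the Kalman filter (e.g. statsmodels' `UnobservedComponents` with a local level and a period-12 seasonal), and competing specifications are ranked by AIC as in the paper.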

  6. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.

  7. Retrieving hydrological connectivity from empirical causality in karst systems

    NASA Astrophysics Data System (ADS)

    Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier

    2017-04-01

    Because of their complexity, karst systems exhibit nonlinear dynamics, and their hidden behavior complicates the choice of the most suitable model. Therefore, both intense investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase-space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if causal relationships can be linked to physical processes, the method shows great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
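    The first step of CCM is the phase-space reconstruction mentioned above. A minimal sketch of time-delay (Takens) embedding is shown below; the CCM step proper would then cross-map nearest neighbors between two such reconstructions. Function and parameter names are illustrative, not from the paper:

```python
def delay_embed(series, dim, tau):
    """Reconstruct the phase space of a scalar time series by time-delay
    embedding: each point is (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

# Toy series; the first embedded point uses indices 4, 2, 0.
pts = delay_embed(list(range(10)), dim=3, tau=2)
```

    The embedding dimension and delay are themselves diagnostic choices; CCM then asks whether nearby points in one variable's reconstruction map to nearby points in the other's.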

  8. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for the calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
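    The rescaling idea itself is compact, independent of the GPU or MATLAB implementation: run a baseline simulation once with zero absorption, store each detected photon's weight and total pathlength, then obtain the reflectance for any absorption coefficient by Beer-Lambert reweighting. A sketch with hypothetical photon records:

```python
import math

def rescale_reflectance(weights, pathlengths, mu_a, n_launched):
    """Rescale a single absorption-free Monte Carlo run to a new absorption
    coefficient mu_a (1/mm): each detected photon's weight is attenuated by
    Beer-Lambert decay over its stored pathlength L (mm)."""
    total = sum(w * math.exp(-mu_a * L)
                for w, L in zip(weights, pathlengths))
    return total / n_launched

# Hypothetical detected-photon records from one baseline run:
w = [1.0, 0.8, 0.9]
L = [2.0, 5.0, 1.0]
r_low  = rescale_reflectance(w, L, mu_a=0.01, n_launched=10)
r_high = rescale_reflectance(w, L, mu_a=0.10, n_launched=10)
assert r_high < r_low  # stronger absorption -> less remitted light
```

    Because the expensive scattering paths are computed only once, sweeping mu_a reduces to the cheap exponential reweighting above, which is what the GPU parallelizes across photons and property sets.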

  9. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    PubMed

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinates of neighbour matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLA, Mantel test, and RDA based on linear time trends) on all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that a strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to decrease the statistical power of the RDA-PCNM approach relative to the other methods. Based on its statistical performance and the information content of RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.
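    The TLA side of the comparison is easy to state in code: compute mean community dissimilarity at each lag (in practice those dissimilarities are then regressed on the square root of the lag). A sketch using Bray-Curtis dissimilarity on a toy drifting pseudo-community; all data are hypothetical:

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two community abundance vectors."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

def time_lag_analysis(samples):
    """Mean community dissimilarity at each time lag, the core TLA quantity."""
    n = len(samples)
    return {lag: sum(bray_curtis(samples[t], samples[t + lag])
                     for t in range(n - lag)) / (n - lag)
            for lag in range(1, n)}

# Two-species community drifting steadily over four sampling dates:
lags = time_lag_analysis([[10, 0], [8, 2], [6, 4], [4, 6]])
# dissimilarity grows with lag for this directional-change example
```

    Note that the result is a single dissimilarity-vs-lag profile: unlike RDA-PCNM, it carries no information about which species drive the change.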

  10. Intelligent fuzzy controller for event-driven real time systems

    NASA Technical Reports Server (NTRS)

    Grantner, Janos; Patyra, Marek; Stachowicz, Marian S.

    1992-01-01

    Most of the known linguistic models are essentially static, that is, time is not a parameter in describing the behavior of the object's model. In this paper we show a model for synchronous finite state machines based on fuzzy logic. Such finite state machines can be used to build both event-driven, time-varying, rule-based systems and the control unit section of a fuzzy logic computer. The architecture of a pipelined intelligent fuzzy controller is presented, and the linguistic model is represented by an overall fuzzy relation stored in a single rule memory. A VLSI integrated circuit implementation of the fuzzy controller is suggested. At a clock rate of 30 MHz, the controller can perform 3 MFLIPS on multi-dimensional fuzzy data.
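    The linguistic state-transition step can be illustrated abstractly. Below is a minimal sketch (not the VLSI design) of a fuzzy state machine update by max-min composition of the current fuzzy state vector with a stored overall fuzzy relation; the relation values are hypothetical:

```python
def next_state(state, relation):
    """One step of a fuzzy finite state machine: the next fuzzy state vector
    is the max-min composition of the current state vector with the overall
    fuzzy relation (a state-by-state membership matrix)."""
    n = len(relation[0])
    return [max(min(state[i], relation[i][j]) for i in range(len(state)))
            for j in range(n)]

# Hypothetical 3-state relation, as would be stored in a single rule memory:
R = [[0.9, 0.3, 0.0],
     [0.2, 0.8, 0.5],
     [0.0, 0.4, 0.7]]
s1 = next_state([1.0, 0.2, 0.0], R)
```

    In hardware, each output component is a max over parallel min gates, which is what makes a pipelined implementation at high clock rates natural.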

  11. Creep and Oxidation of Hafnium Diboride Based Ultra High Temperature Ceramics at 1500°C

    DTIC Science & Technology

    2015-12-01

    through experimentation. Although the Literature Review showed that some theories and models have been developed based on extensive experimental results...of Some Refractory Metals & Ceramics [Fahrenholtz] ........... 14 Figure 4: Creep Strain vs Time Based on Burgers Model ...

  12. Comparison of different synthetic 5-min rainfall time series regarding their suitability for urban drainage modelling

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András

    2015-04-01

    For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. The series are then compared in terms of simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
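    The event-based comparison criteria are straightforward to compute from a 5-min series. A sketch (threshold, step width, and series values are hypothetical):

```python
from itertools import groupby

def spell_statistics(rain, threshold=0.0, step_minutes=5):
    """Event-based statistics of a 5-min rainfall series: mean wet and dry
    spell durations (minutes) and mean wet spell amount (same unit as rain)."""
    wet_durs, wet_amounts, dry_durs = [], [], []
    for is_wet, run in groupby(rain, key=lambda r: r > threshold):
        vals = list(run)
        if is_wet:
            wet_durs.append(len(vals) * step_minutes)
            wet_amounts.append(sum(vals))
        else:
            dry_durs.append(len(vals) * step_minutes)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"mean_wet_min": mean(wet_durs),
            "mean_dry_min": mean(dry_durs),
            "mean_wet_amount": mean(wet_amounts)}

# Toy 45-minute series in mm per 5-min step:
stats = spell_statistics([0, 0, 0.4, 1.2, 0, 0, 0, 0.2, 0])
```

    Extreme value statistics would additionally require aggregating the series over the various durations of interest before ranking the annual maxima.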

  13. HMI Data Driven Magnetohydrodynamic Model Predicted Active Region Photospheric Heating Rates: Their Scale Invariant, Flare Like Power Law Distributions, and Their Possible Association With Flares

    NASA Technical Reports Server (NTRS)

    Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.

    2017-01-01

    There are many flare forecasting models; for an excellent review and comparison of some of them, see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim that the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, derived from observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J; initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3-D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J, and the associated resistive heating rate, in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic & Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel; errors in B due to these periods can be significant.

  14. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources; its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database updates, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories.
This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, prepares a decision on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for a time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows monitoring of current developments and short-term forecasts, and facilitates access to the comparisons archive stored in the database.

  15. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, it is found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The Receiver Operating Characteristics (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
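    The RJ rate combines the two ingredients named above: Gutenberg-Richter frequency-magnitude scaling and Omori-Utsu temporal decay. A sketch is given below; the parameter values are illustrative defaults, not the values fitted for the Chi-Chi sequence:

```python
def rj_rate(t, mag, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones daily rate of aftershocks with magnitude >= mag at
    time t (days) after a mainshock: a Gutenberg-Richter term in magnitude
    times an Omori-Utsu decay term in time. Parameter values illustrative."""
    return 10.0 ** (a + b * (mainshock_mag - mag)) / (t + c) ** p

# A time-dependent b value, in the spirit of the modified (MRJ) model:
def mrj_rate(t, mag, mainshock_mag, b_of_t, **kw):
    return rj_rate(t, mag, mainshock_mag, b=b_of_t(t), **kw)
```

    The modification studied in the paper amounts to letting b recover toward its background value over time, as the observed frequency-magnitude distribution did.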

  16. Dynamic, physical-based landslide susceptibility modelling based on real-time weather data

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Glade, Thomas

    2016-04-01

    By now there seems to be broad consensus that, due to human-induced global change, the frequency and magnitude of precipitation intensities within extensive rainstorm events are expected to increase in certain parts of the world. Given that rainfall is one of the most common triggers of landslide initiation, increased landslide activity might also be expected. Landslide occurrence is a globally widespread phenomenon that clearly needs to be handled by a variety of concepts, methods, and models. However, most landslide research deals with retrospective cases, and classical back-analysis approaches do not incorporate real-time data. This is remarkable, as most destructive landslides are related to immediate events with external triggering factors. Only a few works so far have addressed real-time dynamic components for spatial landslide susceptibility and hazard assessment. Here we present an approach for integrating real-time Web-based rainfall data from different sources into an automated workflow. Rain gauge measurements are interpolated into a continuous raster, which in turn is directly utilized in a dynamic, physical-based model. We use the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) model, modified so that it is automatically updated with the most recent rainfall raster to produce hourly landslide susceptibility maps on a regional scale. To account for the uncertainties involved in spatial modelling, the model was further adjusted to accept ranges rather than single values for the geotechnical parameters: values are drawn randomly between user-defined thresholds defining each parameter range. Consequently, a slope failure probability is computed from a larger number of model runs, rather than just the distributed factor of safety. This will ultimately allow near-real-time spatial landslide alerts for a given region.
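    The randomized stability assessment can be sketched with a simple infinite-slope factor of safety, a strong simplification of the TRIGRS formulation; all parameter names, values, and ranges here are hypothetical:

```python
import math
import random

def failure_probability(slope_deg, depth, unit_weight, pore_pressure,
                        c_range, phi_range_deg, n_runs=5000, seed=1):
    """Monte Carlo slope-failure probability for an infinite-slope factor of
    safety, sampling cohesion c (kPa) and friction angle phi uniformly within
    user-defined ranges, in the spirit of the randomized parameterization
    described above. Returns the fraction of runs with FS < 1."""
    rng = random.Random(seed)
    beta = math.radians(slope_deg)
    failures = 0
    for _ in range(n_runs):
        c = rng.uniform(*c_range)
        phi = math.radians(rng.uniform(*phi_range_deg))
        fs = (math.tan(phi) / math.tan(beta)
              + (c - pore_pressure * math.tan(phi))
              / (unit_weight * depth * math.sin(beta) * math.cos(beta)))
        failures += fs < 1.0
    return failures / n_runs

p = failure_probability(35.0, 2.0, 20.0, 5.0, (2.0, 8.0), (25.0, 35.0))
```

    Rerunning this for each cell whenever a new rainfall raster updates the pore pressure yields the hourly probability maps the workflow targets.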

  17. Determinants of perceived sleep quality in normal sleepers.

    PubMed

    Goelema, M S; Regis, M; Haakma, R; van den Heuvel, E R; Markopoulos, P; Overeem, S

    2017-09-20

    This study aimed to establish the determinants of perceived sleep quality over a longer period of time, taking into account the separate contributions of actigraphy-based sleep measures and self-reported sleep indices. Fifty participants (52 ± 6.6 years; 27 females) completed two consecutive weeks of home monitoring, during which they kept a sleep-wake diary while their sleep was monitored using a wrist-worn actigraph. The diary included questions on perceived sleep quality, sleep-wake information, and additional factors such as well-being and stress. The data were analyzed using multilevel models to compare a model that included only actigraphy-based sleep measures (model Acti) to a model that included only self-reported sleep measures to explain perceived sleep quality (model Self). In addition, a model based on the self-reported sleep measures and extended with nonsleep-related factors was analyzed to find the most significant determinants of perceived sleep quality (model Extended). Self-reported sleep measures (model Self) explained 61% of the total variance, while actigraphy-based sleep measures (model Acti) only accounted for 41% of the perceived sleep quality. The main predictors in the self-reported model were number of awakenings during the night, sleep onset latency, and wake time after sleep onset. In the extended model, the number of awakenings during the night and total sleep time of the previous night were the strongest determinants of perceived sleep quality, with 64% of the variance explained. In our cohort, perceived sleep quality was mainly determined by self-reported sleep measures and less by actigraphy-based sleep indices. These data further stress the importance of taking multiple nights into account when trying to understand perceived sleep quality.

  18. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasting. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using gridpoint-based measures (critical success index, bias, index of agreement, and root mean square error) and an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time, performing worse than ARPS beyond 4-5 h of lead time. Regarding the object-based verification method, most forecasts produced by BJ-ANC fall at the center of the SAL diagram at the 1-h lead time, indicating high-quality forecasts. As the lead time increases, BJ-ANC overestimates precipitation amount and produces overly widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly early in the forecast.
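    The gridpoint-based measures reduce to counts in a yes/no contingency table at each threshold. A sketch of the two simplest scores, with hypothetical counts:

```python
def gridpoint_scores(hits, misses, false_alarms):
    """Critical success index (CSI) and frequency bias from a yes/no
    precipitation contingency table at a given rain threshold."""
    csi = hits / (hits + misses + false_alarms)        # 1 is perfect
    bias = (hits + false_alarms) / (hits + misses)     # >1 = overforecasting
    return csi, bias

csi, bias = gridpoint_scores(hits=40, misses=10, false_alarms=30)
```

    The SAL score, by contrast, compares whole precipitation objects (their structure, total amplitude, and location) rather than matching grid points one by one.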

  19. Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Bacon, Barton J.

    2006-01-01

    The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering-quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real-Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turnaround time for incorporating tabular model components was improved through auto-generation of the required Simulink diagrams from data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability, with zero elapsed time, for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script-definable interface to the vehicle model, which facilitates incorporating new models.

  20. A Spiking Neural Network Model of the Medial Superior Olive Using Spike Timing Dependent Plasticity for Sound Localization

    PubMed Central

    Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.

    2010-01-01

    Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference (ITD) between sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule with experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate that the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it is demonstrated that software-based simulations of the model incur significant computation times. The paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
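    The ITD cue the MSO extracts can be illustrated outside the spiking model: a toy cross-correlation estimator (not the SNN itself) recovering a known inter-ear delay from two signals. All signal values here are hypothetical:

```python
def estimate_itd(left, right, max_lag, fs=44100):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of left- and right-ear signals."""
    def xcorr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    best = max(range(-max_lag, max_lag + 1), key=xcorr)
    return best / fs  # sign indicates which ear leads

# Toy impulse delayed by 3 samples in the right-ear channel:
sig = [0.0] * 20
sig[5] = 1.0
delayed = [0.0] * 20
delayed[8] = 1.0
itd = estimate_itd(sig, delayed, max_lag=5)
```

    The SNN performs an analogous computation with delay lines and coincidence-detecting neurons, so that the neuron tuned to the matching delay fires most strongly.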

  1. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim

    PubMed Central

    Modenese, L.; Lloyd, D.G.

    2017-01-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices enabling the use of personalised musculoskeletal models in real-time. PMID:27723992

  2. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim.

    PubMed

    Pizzolato, C; Reggiani, M; Modenese, L; Lloyd, D G

    2017-03-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices enabling the use of personalised musculoskeletal models in real-time.

  3. Model-based methods for case definitions from administrative health data: application to rheumatoid arthritis

    PubMed Central

    Kroeker, Kristine; Widdifield, Jessica; Muthukumarana, Saman; Jiang, Depeng; Lix, Lisa M

    2017-01-01

    Objective This research proposes a model-based method to facilitate the selection of disease case definitions from validation studies for administrative health data. The method is demonstrated for a rheumatoid arthritis (RA) validation study. Study design and setting Data were from 148 definitions to ascertain cases of RA in hospital, physician and prescription medication administrative data. We considered: (A) separate univariate models for sensitivity and specificity, (B) univariate model for Youden’s summary index and (C) bivariate (ie, joint) mixed-effects model for sensitivity and specificity. Model covariates included the number of diagnoses in physician, hospital and emergency department records, physician diagnosis observation time, duration of time between physician diagnoses and number of RA-related prescription medication records. Results The most common case definition attributes were: 1+ hospital diagnosis (65%), 2+ physician diagnoses (43%), 1+ specialist physician diagnosis (51%) and 2+ years of physician diagnosis observation time (27%). Statistically significant improvements in sensitivity and/or specificity for separate univariate models were associated with (all p values <0.01): 2+ and 3+ physician diagnoses, unlimited physician diagnosis observation time, 1+ specialist physician diagnosis and 1+ RA-related prescription medication records (65+ years only). The bivariate model produced similar results. Youden’s index was associated with these same case definition criteria, except for the length of the physician diagnosis observation time. Conclusion A model-based method provides valuable empirical evidence to aid in selecting a definition(s) for ascertaining diagnosed disease cases from administrative health data. The choice between univariate and bivariate models depends on the goals of the validation study and number of case definitions. PMID:28645978
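    Youden's summary index used in approach (B) combines both validity measures into one number. A minimal sketch; the sensitivity and specificity values below are hypothetical, not results from the study:

```python
def youdens_index(sensitivity, specificity):
    """Youden's J summarizes a case definition's validity:
    J = sensitivity + specificity - 1, from 0 (uninformative) to 1 (perfect)."""
    return sensitivity + specificity - 1.0

# e.g. a hypothetical "2+ physician diagnoses" case definition:
j = youdens_index(sensitivity=0.78, specificity=0.93)
```

    Modeling J as a function of case-definition attributes (approach B) trades the separate sensitivity/specificity detail of approaches (A) and (C) for a single regression outcome.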

  4. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black-box and physically based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by a wavelet-based de-noising approach, and the effect of the de-noised data on the performance of the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of the piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of the piezometers as interior conditions, the radial basis function, as a meshless method that solves the partial differential equations of groundwater flow and contaminant transport (GFCT), was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the efficiency of the ANFIS-based spatiotemporal model exceeded that of the ANN-based model by up to 13%.
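    The meshless estimation step rests on radial basis function interpolation. Below is a minimal 1-D sketch with a Gaussian kernel (real use would be 2-D over the plain and coupled to the flow/transport equations; the transect data and shape parameter are hypothetical):

```python
import math

def rbf_interpolator(xs, ys, eps=1.0):
    """Fit a 1-D Gaussian radial basis function interpolant
    s(x) = sum_j w_j * exp(-(eps*(x - x_j))**2) by solving the dense
    linear system Phi w = y with naive Gaussian elimination."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(xs[i] - xs[j]) for j in range(n)] + [ys[i]] for i in range(n)]
    for col in range(n):                       # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c]
                              for c in range(r + 1, n))) / A[r][r]
    return lambda x: sum(w_j * phi(x - x_j) for w_j, x_j in zip(w, xs))

# Interpolate hypothetical chloride concentrations along a transect (km, mg/L):
s = rbf_interpolator([0.0, 1.0, 2.0, 3.0], [10.0, 14.0, 9.0, 12.0])
```

    Because the interpolant passes exactly through the piezometer values, the ANN/ANFIS predictions at the piezometers act as interior conditions for the field estimate between them.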

  5. Analysis of Parametric Adaptive Signal Detection with Applications to Radars and Hyperspectral Imaging

    DTIC Science & Technology

    2010-02-01

    ...associated with the proposed parametric model. Several important issues are discussed, including model order selection, training screening, and time...parameters associated with the NS-AR model. In addition, we develop model order selection, training screening, and time-series based whitening and...

  6. Modeling IoT-Based Solutions Using Human-Centric Wireless Sensor Networks

    PubMed Central

    Monares, Álvaro; Ochoa, Sergio F.; Santos, Rodrigo; Orozco, Javier; Meseguer, Roc

    2014-01-01

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes. PMID:25157549

  7. Modeling IoT-based solutions using human-centric wireless sensor networks.

    PubMed

    Monares, Álvaro; Ochoa, Sergio F; Santos, Rodrigo; Orozco, Javier; Meseguer, Roc

    2014-08-25

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at design time. Therefore, the process of designing these systems is ad hoc, and their real impact is evaluated only once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at design time, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a later phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first response during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes.

  8. A Tree Based Broadcast Scheme for (m, k)-firm Real-Time Stream in Wireless Sensor Networks.

    PubMed

    Park, HoSung; Kim, Beom-Su; Kim, Kyong Hoon; Shah, Babar; Kim, Ki-Il

    2017-11-09

    Recently, various unicast routing protocols have been proposed to deliver measured data from the sensor node to the sink node within a predetermined deadline in wireless sensor networks. In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model and improvements in energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there has been no study of a real-time broadcast protocol supporting an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new ( m , k )-firm-based Real-time Broadcast Protocol (FRBP) that constructs a broadcast tree satisfying the ( m , k )-firm constraint, making it applicable to the real-time model in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To cope with unstable network environments, the recovery scheme invokes rapid partial tree reconstruction, designating another node as the parent on the tree according to the measured ( m , k )-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP over existing schemes in terms of average deadline missing ratio, average throughput and energy consumption.
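The (m, k)-firm constraint that FRBP monitors can be stated concisely: of any k consecutive packet instances, at least m must meet their deadlines. A minimal sketch of such a check (the function and the traffic histories below are illustrative, not from the paper):

```python
from collections import deque

def mk_firm_ok(history, m, k):
    """Check the (m, k)-firm condition: at least m of any k
    consecutive instances must have met their deadline.
    `history` is an iterable of booleans (True = deadline met)."""
    window = deque(maxlen=k)
    for met in history:
        window.append(met)
        if len(window) == k and sum(window) < m:
            return False  # dynamic failure: fewer than m hits in a k-window
    return True

# e.g. (2,3)-firm: every 3 consecutive instances need >= 2 deadline hits
print(mk_firm_ok([True, True, False, True, True], m=2, k=3))   # True
print(mk_firm_ok([True, False, False, True, True], m=2, k=3))  # False
```

A node monitoring its own history this way can trigger FRBP-style recovery as soon as a k-window drops below m hits.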

  9. Using metagenomic and metatranscriptomic observations to test a thermodynamic-based model of community metabolic expression over time and space

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Huber, J. A.

    2016-02-01

    Marine biogeochemistry is orchestrated by a complex and dynamic community of microorganisms that attempt to maximize their own fecundity through a combination of competition and cooperation. At a systems level, the community can be described as a distributed metabolic network, where different species contribute their own unique set of metabolic capabilities. Our current project attempts to understand the governing principles that describe amplification or attenuation of metabolic pathways within the network through a combination of modeling and metagenomic, metatranscriptomic and biogeochemical observations. We will describe and present results from our thermodynamic-based model that determines optimal pathway expression from available resources based on the principle of maximum entropy production (MEP); that is, based on the hypothesis that non-equilibrium systems organize to maximize energy dissipation. The MEP model currently predicts metabolic pathway expression over time and one spatial dimension. Model predictions will be compared to biogeochemical observations and gene presence and expression from samples collected over time and space from a coastal meromictic basin (Siders Pond) located in Falmouth, MA, USA. Siders Pond's permanent stratification, caused by occasional seawater intrusion, results in steep chemoclines and redox gradients, which support both aerobic and anaerobic phototrophs as well as sulfur, Fe and Mn redox cycles. The diversity of metabolic capability and expression we have observed over depth makes it an ideal system to test our thermodynamic-based model.

  10. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
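The paper's localized force-transmittal idea, deformation propagated breadth-first from the contact point and cut off at a chosen penetration depth, can be sketched as follows (the mesh graph, decay factor and depth limit are illustrative assumptions, not the authors' exact scheme):

```python
from collections import deque

def propagate_force(adjacency, contact_node, force, max_depth, decay=0.5):
    """BFS force transmittal: assign displacement to nodes reachable
    within `max_depth` hops of the contact node, attenuating per hop."""
    displacement = {contact_node: force}
    visited = {contact_node}
    frontier = deque([(contact_node, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # penetration depth bounds the work per interaction
        for nbr in adjacency[node]:
            if nbr not in visited:
                visited.add(nbr)
                displacement[nbr] = displacement[node] * decay
                frontier.append((nbr, depth + 1))
    return displacement

# chain 0-1-2-3; force applied at node 0, propagation limited to 2 hops
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(propagate_force(adj, 0, 1.0, max_depth=2))  # {0: 1.0, 1: 0.5, 2: 0.25}
```

Because the traversal is bounded by `max_depth`, the cost per interaction scales with the neighborhood size rather than the whole mesh, which is what makes the approach tunable for real-time use.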

  11. Hybrid grammar-based approach to nonlinear dynamical system identification from biological time series

    NASA Astrophysics Data System (ADS)

    McKinney, B. A.; Crowe, J. E., Jr.; Voss, H. U.; Crooke, P. S.; Barney, N.; Moore, J. H.

    2006-02-01

    We introduce a grammar-based hybrid approach to reverse engineering nonlinear ordinary differential equation models from observed time series. This hybrid approach combines a genetic algorithm to search the space of model architectures with a Kalman filter to estimate the model parameters. Domain-specific knowledge is used in a context-free grammar to restrict the search space for the functional form of the target model. We find that the hybrid approach outperforms a pure evolutionary algorithm method, and we observe features in the evolution of the dynamical models that correspond with the emergence of favorable model components. We apply the hybrid method to both artificially generated time series and experimentally observed protein levels from subjects who received the smallpox vaccine. From the observed data, we infer a cytokine protein interaction network for an individual’s response to the smallpox vaccine.
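A context-free grammar restricting the functional form of candidate models, as in the hybrid approach's search over ODE right-hand sides, might look like this minimal sketch (the grammar, symbols and depth cap are illustrative, not the authors' actual grammar):

```python
import random

# toy grammar for right-hand-side expressions over state variables x, y
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["var"], ["coef", "*", "var"]],
    "var":  [["x"], ["y"]],
    "coef": [["k"]],
}

def derive(symbol, rng, depth=0, max_depth=4):
    """Randomly expand a nonterminal; the depth cap forces the shortest
    production so every candidate expression stays finite."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]
    out = []
    for sym in rng.choice(options):
        out.extend(derive(sym, rng, depth + 1, max_depth))
    return out

rng = random.Random(0)
print(" ".join(derive("expr", rng)))
```

A genetic algorithm would generate, mutate and recombine such derivations, while a Kalman filter fits the numeric parameters (here the placeholder `k`) of each candidate.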

  12. Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-10-01

    Based on the epidemic dynamical system, we construct a new agent-based financial time series model. In order to check and verify its rationality, we compare the statistical properties of the time series model with those of two real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine the multi-parameter analysis with the tail distribution analysis, the modified rescaled range analysis, and the multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in both the real returns and the proposed model. Therefore, the new agent-based financial model can reproduce some important features of real stock markets.

  13. Real-time GIS data model and sensor web service platform for environmental data management.

    PubMed

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacks real-time data retrieval and sharing/interoperating capability. With the development of information technology, a Geospatial Service Web method has been proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. A real-time GIS (Geographic Information System) data model and a Sensor Web service platform for environmental data management under the Geospatial Service Web framework are proposed in this study. The real-time GIS data model manages real-time data, while the Sensor Web service platform supports the realization of the data model based on Sensor Web technologies. The platform is implemented, and real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in it. In addition, two use cases of real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total processing times of the two experiments are 3.7 s and 9.2 s. The experimental results show that integrating the real-time GIS data model and the Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.

  14. A hydrothermal after-ripening time model for seed dormancy loss in Bromus tectorum L.

    Treesearch

    Necia B. Bair; Susan E. Meyer; Phil S. Allen

    2006-01-01

    After-ripening, the loss of dormancy under dry conditions, is associated with a decrease in mean base water potential for germination of Bromus tectorum L. seeds. After-ripening rate is a linear function of temperature above a base temperature, so that dormancy loss can be quantified using a thermal after-ripening time (TAR) model. To incorporate storage water...
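The thermal-time idea behind the TAR model, accumulating degree-days above a base temperature until a seed-lot-specific dormancy-loss requirement is reached, can be sketched in a few lines (the temperatures and base temperature below are made up for illustration):

```python
def thermal_after_ripening_time(daily_temps, base_temp):
    """Accumulate thermal time (degree-days) above a base temperature;
    in a TAR model, after-ripening rate is linear in temperature above
    the base, so dormancy loss tracks this accumulated sum."""
    return sum(max(0.0, t - base_temp) for t in daily_temps)

# e.g. hypothetical base temperature 20 C, one week of dry storage
temps = [25, 30, 28, 22, 18, 31, 27]
print(thermal_after_ripening_time(temps, base_temp=20.0))  # 43.0 degree-days
```

The hydrothermal extension of the paper would additionally weight each increment by the storage water potential, but the accumulation logic is the same.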

  15. Real-time pricing strategy of micro-grid energy centre considering price-based demand response

    NASA Astrophysics Data System (ADS)

    Xu, Zhiheng; Zhang, Yongjun; Wang, Gan

    2017-07-01

    With the development of energy conversion technologies such as power to gas (P2G) and fuel cells, the coupling between energy sources is becoming increasingly close, and centralized dispatch of electricity, natural gas and heat will become a trend. With the goal of maximizing system revenue, this paper establishes a model of a micro-grid energy centre based on the energy hub. Based on the proposed model, a real-time pricing strategy that takes into account the price-based demand response of the load is developed, and the influence of the real-time pricing strategy on peak load shifting is discussed. In addition, the impact of wind power prediction inaccuracy on the real-time pricing strategy is analysed.

  16. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large-time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model, and the time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
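One common recipe for energy-based step adaptivity of this kind bounds the step between a minimum and a maximum and shrinks it when the energy changes quickly; the formula and constants below are a hedged illustration, not necessarily the authors' exact choice:

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    """Pick a time step from the energy's time derivative:
    dt = max(dt_min, dt_max / sqrt(1 + alpha * |E'|^2)).
    Fast energy decay -> small steps; near steady state -> dt_max."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))

print(adaptive_dt(0.5))    # fast dynamics -> small step
print(adaptive_dt(1e-6))   # near steady state -> close to dt_max
```

Paired with an unconditionally energy stable scheme, taking large steps near steady state cannot destabilize the discrete energy, which is what makes this adaptivity safe.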

  17. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  18. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    USDA-ARS?s Scientific Manuscript database

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  20. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages to current approaches to vehicle architecture assessment including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.
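An activity-based roll-up of ground-support costs and cycle time can be sketched in a few lines; the activities, durations, crew sizes and labor rates below are hypothetical, not data from the paper:

```python
def abc_assessment(activities, flights_per_year):
    """Activity-based costing roll-up for ground operations.
    Each activity is a tuple (duration_hours, crew_size, hourly_rate);
    serial cycle time and per-flight cost follow directly."""
    cycle_hours = sum(h for h, _, _ in activities)
    cost_per_flight = sum(h * crew * rate for h, crew, rate in activities)
    return {"cycle_time_h": cycle_hours,
            "cost_per_flight": cost_per_flight,
            "annual_ops_cost": cost_per_flight * flights_per_year}

# hypothetical activities: inspection, fueling, payload integration
ops = [(40, 6, 85.0), (12, 4, 95.0), (72, 10, 110.0)]
print(abc_assessment(ops, flights_per_year=12))
```

Because each activity's time and cost are tied to vehicle design characteristics, a designer can see directly which activities drive cycle time and cost, which is the advantage the paper claims for the ABC approach.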

  1. Creep Tests and Modeling Based on Continuum Damage Mechanics for T91 and T92 Steels

    NASA Astrophysics Data System (ADS)

    Pan, J. P.; Tu, S. H.; Zhu, X. W.; Tan, L. J.; Hu, B.; Wang, Q.

    2017-12-01

    9-11%Cr ferritic steels play an important role in high-temperature and high-pressure boilers of advanced power plants. In this paper, a continuum damage mechanics (CDM)-based creep model was proposed to study the creep behavior of T91 and T92 steels at high temperatures. Long-time creep tests were performed for both steels under different conditions. The creep rupture data and creep curves obtained from creep tests were captured well by theoretical calculation based on the CDM model over a long creep time. It is shown that the developed model is able to predict creep data for the two ferritic steels accurately up to tens of thousands of hours.

  2. A discrete mechanics framework for real time virtual surgical simulations with application to virtual laparoscopic nephrectomy.

    PubMed

    Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert

    2009-01-01

    The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that strikes a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of this discrete mechanics framework with attention focused on a virtual laparoscopic nephrectomy application.

  3. Does Sedentary Behavior Predict Academic Performance in Adolescents or the Other Way Round? A Longitudinal Path Analysis

    PubMed Central

    Lizandra, Jorge; Devís-Devís, José; Pérez-Gimeno, Esther; Valencia-Peris, Alexandra; Peiró-Velert, Carmen

    2016-01-01

    This study examined whether adolescents’ time spent on sedentary behaviors (academic, technological-based and social-based activities) was a better predictor of academic performance than the reverse. A cohort of 755 adolescents participated in a three-year period study. Structural Equation Modeling techniques were used to test plausible causal hypotheses. Four competing models were analyzed to determine which model best fitted the data. The Best Model was separately tested by gender. The Best Model showed that academic performance was a better predictor of sedentary behaviors than the other way round. It also indicated that students who obtained excellent academic results were more likely to succeed academically three years later. Moreover, adolescents who spent more time in the three different types of sedentary behaviors were more likely to engage longer in those sedentary behaviors after the three-year period. The better the adolescents performed academically, the less time they devoted to social-based activities and more to academic activities. An inverse relationship emerged between time dedicated to technological-based activities and academic sedentary activities. A moderating auto-regressive effect by gender indicated that boys were more likely to spend more time on technological-based activities three years later than girls. To conclude, previous academic performance predicts better sedentary behaviors three years later than the reverse. The positive longitudinal auto-regressive effects on the four variables under study reinforce the ‘success breeds success’ hypothesis, with academic performance and social-based activities emerging as the strongest ones. Technological-based activities showed a moderating effect by gender and a negative longitudinal association with academic activities that supports a displacement hypothesis. 
Other longitudinal and covariate effects reflect the complex relationships among sedentary behaviors and academic performance and the need to explore these relationships in depth. Theoretical and practical implications for school health are outlined. PMID:27055121

  4. Does Sedentary Behavior Predict Academic Performance in Adolescents or the Other Way Round? A Longitudinal Path Analysis.

    PubMed

    Lizandra, Jorge; Devís-Devís, José; Pérez-Gimeno, Esther; Valencia-Peris, Alexandra; Peiró-Velert, Carmen

    2016-01-01

    This study examined whether adolescents' time spent on sedentary behaviors (academic, technological-based and social-based activities) was a better predictor of academic performance than the reverse. A cohort of 755 adolescents participated in a three-year period study. Structural Equation Modeling techniques were used to test plausible causal hypotheses. Four competing models were analyzed to determine which model best fitted the data. The Best Model was separately tested by gender. The Best Model showed that academic performance was a better predictor of sedentary behaviors than the other way round. It also indicated that students who obtained excellent academic results were more likely to succeed academically three years later. Moreover, adolescents who spent more time in the three different types of sedentary behaviors were more likely to engage longer in those sedentary behaviors after the three-year period. The better the adolescents performed academically, the less time they devoted to social-based activities and more to academic activities. An inverse relationship emerged between time dedicated to technological-based activities and academic sedentary activities. A moderating auto-regressive effect by gender indicated that boys were more likely to spend more time on technological-based activities three years later than girls. To conclude, previous academic performance predicts better sedentary behaviors three years later than the reverse. The positive longitudinal auto-regressive effects on the four variables under study reinforce the 'success breeds success' hypothesis, with academic performance and social-based activities emerging as the strongest ones. Technological-based activities showed a moderating effect by gender and a negative longitudinal association with academic activities that supports a displacement hypothesis. 
Other longitudinal and covariate effects reflect the complex relationships among sedentary behaviors and academic performance and the need to explore these relationships in depth. Theoretical and practical implications for school health are outlined.

  5. Temporal variation and scaling of parameters for a monthly hydrologic model

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by the catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that this yields lower NSEs than the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it yields higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
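The regression step, relating a calibrated parameter to a catchment property such as NDVI so that an annual value can be downscaled to monthly values, can be sketched with a single-predictor least-squares fit (the NDVI and k values below are invented for illustration, not MOPEX data):

```python
def ols_fit(x, y):
    """Ordinary least-squares intercept and slope for one predictor,
    as used to relate a model parameter to a catchment property."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta  # intercept, slope

# hypothetical monthly NDVI vs calibrated monthly k for one catchment
ndvi = [0.2, 0.35, 0.5, 0.65, 0.7, 0.55]
k    = [0.31, 0.38, 0.46, 0.55, 0.57, 0.49]
a, b = ols_fit(ndvi, k)
k_pred = [a + b * v for v in ndvi]  # downscaled monthly estimates
print(round(b, 3))  # positive slope, consistent with the k-NDVI correlation
```

In the paper the regression is multiple (means and coefficients of variation of both parameters), but the downscaling logic is the same: fit once, then map monthly NDVI to monthly parameter values.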

  6. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    NASA Astrophysics Data System (ADS)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

    An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.

  7. Using features of Arden Syntax with object-oriented medical data models for guideline modeling.

    PubMed

    Peleg, M; Ogunyemi, O; Tu, S; Boxwala, A A; Zeng, Q; Greenes, R A; Shortliffe, E H

    2001-01-01

    Computer-interpretable guidelines (CIGs) can deliver patient-specific decision support at the point of care. CIGs base their recommendations on eligibility and decision criteria that relate medical concepts to patient data. CIG models use expression languages for specifying these criteria, and define models for medical data to which the expressions can refer. In developing version 3 of the GuideLine Interchange Format (GLIF3), we used existing standards as the medical data model and expression language. We investigated the object-oriented HL7 Reference Information Model (RIM) as a default data model. We developed an expression language, called GEL, based on Arden Syntax's logic grammar. Together with other GLIF constructs, GEL reconciles incompatibilities between the data models of Arden Syntax and the HL7 RIM. These incompatibilities include Arden's lack of support for complex data types and time intervals, and the mismatch between Arden's single primary time and multiple time attributes of the HL7 RIM.

  8. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementing the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the proposed robotic system with timed Petri nets offers the opportunity to view the robot's timing: from the transport and transmission times measured on the spot, graphics are obtained showing the average time for each transport activity, using the parameter sets of the finished products individually.
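The token-game semantics underlying such Petri-net models is compact enough to sketch directly; the places and the robot example below are illustrative, not taken from the paper:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places (pre) and
    deposit tokens in output places (post), returning the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# robot picks a part: 'part' + 'robot_idle' tokens become 'robot_busy'
m0 = {"part": 2, "robot_idle": 1}
m1 = fire(m0, {"part": 1, "robot_idle": 1}, {"robot_busy": 1})
print(m1)  # {'part': 1, 'robot_idle': 0, 'robot_busy': 1}
```

A hierarchical model composes nets like this one per subsystem, with higher-level places and transitions abstracting each subsystem's local net.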

  9. Application of troposphere model from NWP and GNSS data into real-time precise positioning

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw

    2016-04-01

    Empirical tropospheric delay models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model in the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing modes and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain the troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and from ZTDs of ground-based GNSS stations, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
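For the hydrostatic part of such delay models, the Saastamoinen formula is a standard choice, mapping surface pressure, latitude and height to zenith hydrostatic delay; a sketch (the station values below are illustrative):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic delay in metres from surface pressure via the
    Saastamoinen model, a common choice for the hydrostatic part of ZTD:
    ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.00000028 * H)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m)

# hypothetical mid-latitude station at 150 m with standard pressure
zhd = saastamoinen_zhd(1013.25, math.radians(52.0), 150.0)
print(round(zhd, 4))  # roughly 2.3 m at sea-level pressure
```

The wet part, which the abstract's NWP- and GNSS-derived ZTDs capture, is far more variable and is the reason regional models outperform the global empirical ones.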

  10. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values instead of directly deleting them. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can improve the forecasting capability of the five forecasting methods used here.
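    The three-stage pipeline described above (imputation, variable selection, Random Forest) can be sketched with scikit-learn; the data below are synthetic stand-ins, not the Shimen Reservoir dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # synthetic atmospheric predictors
y = 2.0 * X[:, 0] + X[:, 3] + rng.normal(scale=0.1, size=200)  # synthetic level
X[rng.random(X.shape) < 0.05] = np.nan         # inject missing values

model = make_pipeline(
    SimpleImputer(strategy="median"),          # step 1: impute missing values
    SelectKBest(f_regression, k=4),            # step 2: keep informative variables
    RandomForestRegressor(n_estimators=100, random_state=0),  # step 3: forecast
)
model.fit(X, y)
r2 = model.score(X, y)
```

    In practice the imputation strategy and the number of selected variables would be chosen by out-of-sample forecasting error, as in the paper's comparisons.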

  11. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localization system is proposed, based on previously developed mammalian auditory models. Traditionally, following these models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel shared among all frequency bands is used. With an optimized execution time much shorter than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important in an efficient hardware implementation of a real-time biomimetic sound source localization system.
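    The ITD estimates such auditory models rely on are, in essence, cross-correlation lags between the two ear signals. A minimal software sketch of the estimate, not the time-shared hardware itself:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate interaural time delay (s) from the cross-correlation peak."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)   # peak position in samples
    return lag / fs

fs = 48_000
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 500 * t)                   # 500 Hz tone at one ear
delay = 10                                          # injected delay in samples
left = np.concatenate([np.zeros(delay), sig])[: len(sig)]
itd = estimate_itd(left, sig, fs)                   # recovers the 10-sample lag
```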

  12. An Agent-Based Model for Analyzing Control Policies and the Dynamic Service-Time Performance of a Capacity-Constrained Air Traffic Management Facility

    NASA Technical Reports Server (NTRS)

    Conway, Sheila R.

    2006-01-01

    Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening before more costly, higher-fidelity simulation. Of concern is the ability of the models to capture the essence of the system and provide insight into system behavior in a timely manner and without breaking the bank. The method is put to the test with the development of a model to address situations where capacity is overburdened and the resultant delay can propagate through later flights via flight dependencies. The resultant model includes primitive representations of principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully chosen modeling fidelity allowed for the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.
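    A toy version of the capacity-constrained service dynamic is enough to see delay propagate through a schedule; the numbers are illustrative, not the paper's model:

```python
def simulate_delays(schedule, service_time, capacity=1):
    """Toy capacity-constrained facility: each flight starts at its scheduled
    time or when a server frees up, whichever is later."""
    free_at = [0.0] * capacity                  # next-free time of each server
    delays = []
    for scheduled in schedule:
        i = min(range(capacity), key=lambda k: free_at[k])
        start = max(scheduled, free_at[i])      # wait if the facility is busy
        free_at[i] = start + service_time
        delays.append(start - scheduled)
    return delays

# Flights scheduled faster than they can be served: delay compounds downstream.
delays = simulate_delays([0.0, 1.0, 2.0, 3.0], service_time=2.0)  # [0.0, 1.0, 2.0, 3.0]
```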

  13. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H{sup −} source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H{sup −} ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim.
    We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  14. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
    We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
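    The two-temperature equilibration discussed here can be caricatured as a pair of relaxation equations integrated explicitly; this is a toy sketch of the physics, not the USim fluid solver:

```python
def equilibrate(te, ti, tau, dt, steps):
    """Explicitly relax electron (te) and ion (ti) temperatures toward a
    common value with exchange time tau; dt must be small relative to tau."""
    for _ in range(steps):
        d = (te - ti) / tau      # symmetric energy exchange rate
        te -= d * dt
        ti += d * dt
    return te, ti

# Toy values: 10 eV electrons, 1 eV ions, 1 microsecond exchange time.
te, ti = equilibrate(10.0, 1.0, tau=1e-6, dt=1e-8, steps=2000)
```

    After 20 exchange times both species sit at the conserved mean temperature of 5.5.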

  15. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating the parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focuses on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of both models was almost similar. Both models produced almost similar estimates, better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this capability was reduced over time.
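    Estimators in this SAE family typically shrink a noisy direct estimate toward a regression prediction in proportion to their relative precisions. A minimal area-level composite sketch with hypothetical numbers, not the Rao-Yu or dynamic model themselves:

```python
def composite_estimate(direct, regression_pred, var_direct, var_model):
    """Area-level composite (EBLUP-style) estimator: weight the direct survey
    estimate against a model prediction by their relative precisions."""
    gamma = var_model / (var_model + var_direct)   # shrinkage factor in [0, 1]
    return gamma * direct + (1.0 - gamma) * regression_pred

# Noisy direct district estimate of 8% shrunk toward a model prediction of 6%.
est = composite_estimate(direct=0.08, regression_pred=0.06,
                         var_direct=0.0004, var_model=0.0001)
```

    The smaller the direct-estimate variance (i.e., the larger the district sample), the closer the result stays to the direct estimate.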

  16. A Novel Framework for Characterizing Exposure-Related Behaviors Using Agent-Based Models Embedded with Needs-Based Artificial Intelligence (CSSSA2016)

    EPA Science Inventory

    Descriptions of where and how individuals spend their time are important for characterizing exposures to chemicals in consumer products and in indoor environments. Herein we create an agent-based model (ABM) that is able to simulate longitudinal patterns in behaviors. By basing o...

  17. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
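    Once a realization of the random intensity is fixed, a doubly stochastic Poisson process reduces to an inhomogeneous Poisson process, which can be simulated by thinning. A sketch with an illustrative intensity function:

```python
import random

def simulate_cox(intensity, t_max, lam_max, seed=0):
    """Simulate points of a doubly stochastic Poisson process by thinning:
    draw candidates at the maximal rate, keep each with prob lambda(t)/lam_max."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)            # candidate inter-arrival time
        if t > t_max:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                     # accepted event time

# Illustrative sawtooth intensity with mean rate 7 events per unit time.
events = simulate_cox(lambda t: 5.0 + 4.0 * (t % 1.0), t_max=100.0, lam_max=9.0)
```

    In a full Cox process the intensity function itself would first be drawn from a stochastic model, then the thinning step repeated per realization.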

  18. Construction schedule simulation of a diversion tunnel based on the optimized ventilation time.

    PubMed

    Wang, Xiaoling; Liu, Xuepeng; Sun, Yuefeng; An, Juan; Zhang, Jing; Chen, Hongchao

    2009-06-15

    In former studies, the methods used to estimate the ventilation time in construction schedule simulation are all empirical. In many real construction schedules, however, many factors affect the ventilation time. Therefore, in this paper 3D unsteady quasi-single-phase models are proposed to optimize the ventilation time for different tunneling lengths. The effect of buoyancy is considered in the momentum equation of the CO transport model, while the effects of inter-phase drag, lift force, and virtual mass force are taken into account in the momentum source of the dust transport model. The prediction by the present model for airflow in a diversion tunnel is confirmed by the experimental values reported by Nakayama [Nakayama, In-situ measurement and simulation by CFD of methane gas distribution at a heading faces, Shigen-to-Sozai 114 (11) (1998) 769-775]. The construction ventilation of the diversion tunnel of the XinTangfang power station in China is used as a case study. The distributions of airflow, CO and dust in the diversion tunnel are analyzed. A theoretical method for GIS-based dynamic visual simulation of the construction processes of underground structure groups is presented that combines cyclic operation network simulation, system simulation, network plan optimization, and GIS-based 3D visualization of construction processes. Based on the optimized ventilation time, the construction schedule of the diversion tunnel is simulated by the above method.
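    For contrast with the CFD approach, the empirical well-mixed dilution estimate of ventilation time (the kind of formula such papers improve on) is a one-liner; the numbers below are hypothetical:

```python
import math

def ventilation_time(volume_m3, airflow_m3_s, c0_ppm, c_limit_ppm):
    """Well-mixed dilution estimate of the time to clear a contaminant:
    t = (V / Q) * ln(C0 / C_limit), assuming perfect mixing and clean inflow."""
    return (volume_m3 / airflow_m3_s) * math.log(c0_ppm / c_limit_ppm)

# Hypothetical tunnel segment: 5000 m3, 20 m3/s fresh air, CO 800 -> 25 ppm.
t_s = ventilation_time(5000.0, 20.0, 800.0, 25.0)   # roughly 14.4 minutes
```

    The well-mixed assumption is exactly what breaks down in a long tunnel, which motivates resolving the airflow and concentration fields instead.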

  19. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
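    The volumetric fusion step "akin to KinectFusion" amounts to a truncated signed-distance running average per voxel; a minimal per-voxel sketch under that interpretation:

```python
def update_voxel(tsdf, weight, new_sdf, trunc=0.05, new_w=1.0):
    """Fuse one new signed-distance observation into a voxel as a weighted
    running average of truncated signed distances (KinectFusion-style)."""
    d = max(-trunc, min(trunc, new_sdf))            # truncate to [-trunc, trunc]
    fused = (tsdf * weight + d * new_w) / (weight + new_w)
    return fused, weight + new_w

# Three noisy depth observations of the same voxel average toward 0.03 m.
v, w = 0.0, 0.0
for obs in [0.04, 0.02, 0.03]:
    v, w = update_voxel(v, w, obs)
```

    Averaging across frames is what suppresses per-frame stereo noise before the implicit surface is extracted.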

  20. [Human resources requirements for diabetic patients healthcare in primary care clinics of the Mexican Institute of Social Security].

    PubMed

    Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo

    2013-12-01

    To estimate the requirements of human resources (HR) of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative research was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR were estimated by using the evidence-based adjusted service target approach for health workforce planning; then, comparisons between existing and estimated HR were made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times), and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
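    At its core, service-target workforce estimation divides total annual task time by the available working time per professional; a sketch with hypothetical figures, not the IMSS data:

```python
def required_staff(patients, visits_per_patient_year, minutes_per_visit,
                   annual_minutes_per_worker):
    """Service-target staffing estimate: total annual task minutes divided by
    available annual working minutes per professional."""
    workload = patients * visits_per_patient_year * minutes_per_visit
    return workload / annual_minutes_per_worker

# Hypothetical clinic: 3000 diabetic patients, 4 visits/year, 25 min/visit,
# and 90,000 working minutes per professional per year.
n = required_staff(3000, 4, 25, 90_000)   # about 3.3 full-time professionals
```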

  1. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion; satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and provide an effective and convenient equation for determining the yield strength over a wider range of strain rates.
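    The Cowper-Symonds model referred to above scales the static yield stress by a strain-rate factor; a sketch using commonly quoted mild-steel parameters (assumed values, not the paper's fits):

```python
def cowper_symonds(sigma_static, strain_rate, D, q):
    """Dynamic yield stress: sigma_d = sigma_s * (1 + (edot / D) ** (1 / q))."""
    return sigma_static * (1.0 + (strain_rate / D) ** (1.0 / q))

# Literature values often quoted for mild steel: D = 40.4 1/s, q = 5.
sigma_d = cowper_symonds(sigma_static=250e6, strain_rate=100.0, D=40.4, q=5.0)
```

    At 100 1/s the dynamic yield stress roughly doubles, which illustrates why rate-independent static values are inadequate for impact problems.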

  2. Simulation platform of LEO satellite communication system based on OPNET

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhang, Yong; Li, Xiaozhuo; Wang, Chuqiao; Li, Haihao

    2018-02-01

    For the purpose of verifying communication protocols in the low earth orbit (LEO) satellite communication system, an Optimized Network Engineering Tool (OPNET) based simulation platform is built. Using the three-layer modeling mechanism, the network model, the node model and the process model of the satellite communication system are built respectively from top to bottom, and the protocols are implemented with finite state machines and the Proto-C language. According to satellite orbit parameters, orbit files are generated via Satellite Tool Kit (STK) and imported into OPNET, and the satellite nodes move along their orbits. The simulation platform adopts a time-slot-driven mode, divides simulation time into continuous time slots, and allocates a slot number to each time slot. A resource allocation strategy is simulated on this platform, and simulation results such as resource utilization rate, system throughput and packet delay are analyzed, which indicates that this simulation platform has outstanding versatility.
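    The time-slot-driven mode can be caricatured as a loop over numbered slots with deterministic arrivals and one channel grant per slot; a toy sketch, not the OPNET model:

```python
def run_slots(periods, n_slots):
    """Toy slotted allocation: each source i queues a packet every periods[i]
    slots; each slot, grant the channel to the longest queue."""
    queues = [0] * len(periods)
    served = 0
    for slot in range(n_slots):
        for i, p in enumerate(periods):
            if slot % p == 0:
                queues[i] += 1                       # deterministic arrival
        busiest = max(range(len(queues)), key=lambda k: queues[k])
        if queues[busiest] > 0:
            queues[busiest] -= 1                     # one grant per slot
            served += 1
    return served / n_slots                          # channel utilization

util = run_slots(periods=[2, 3], n_slots=600)        # offered load 1/2 + 1/3 = 5/6
```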

  3. A structurally based analytic model of growth and biomass dynamics in single species stands of conifers

    Treesearch

    Robin J. Tausch

    2015-01-01

    A theoretically based analytic model of plant growth in single-species conifer communities, based on the species fully occupying a site and fully using the site resources, is introduced. Model derivations result in a single equation that simultaneously describes changes over both different site conditions (or resources available) and over time for each variable for each...

  4. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model based on expert knowledge. Those models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
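    The idea of learning a threshold-style fault rule from telemetry with a decision tree can be sketched on synthetic data; the sensor channels and threshold below are hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical telemetry: two sensor channels; the "fault" label fires when
# channel 0 exceeds a threshold, mimicking a fault-tree threshold rule.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 1.0).astype(int)

# A shallow tree recovers the axis-aligned threshold rule from the data.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
acc = tree.score(X, y)
```

    On live data the learned split thresholds can then be compared against the fault tree's hand-set limits.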

  5. IGBT Switching Characteristic Curve Embedded Half-Bridge MMC Modelling and Real Time Simulation Realization

    NASA Astrophysics Data System (ADS)

    Zhengang, Lu; Hongyang, Yu; Xi, Yang

    2017-05-01

    The Modular Multilevel Converter (MMC) is one of the most attractive topologies in recent years for medium- or high-voltage industrial applications, such as high-voltage dc transmission (HVDC) and medium-voltage variable-speed motor drives. The wide adoption of MMCs in industry is mainly due to their flexible expandability, transformer-less configuration, common dc bus, high reliability from redundancy, and so on. However, as the number of MMC submodules grows, testing the MMC controller costs more time and effort. Hardware-in-the-loop (HIL) testing based on a real-time simulator saves much of the time and money incurred by MMC tests, and thanks to its flexibility HIL has become more and more popular in industry. The MMC modelling method remains an important issue for the MMC HIL test. Specifically, the VSC model should realistically reflect the nonlinear device switching characteristics, switching and conduction losses, tailing current, and diode reverse recovery behaviour of a realistic converter. In this paper, an IGBT switching characteristic curve embedded half-bridge MMC modelling method is proposed. The method is based on switching-curve lookup and sample-circuit calculation, and it is simple to implement. Based on the proposed method, an FPGA real-time simulation is carried out with a 200 ns sample time. The real-time simulation results show that the proposed method is correct.
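    Embedding a switching characteristic curve amounts to a lookup table interpolated at the instantaneous operating point; a sketch with hypothetical turn-off energy points, not a datasheet:

```python
import numpy as np

# Hypothetical IGBT turn-off energy curve: current (A) -> energy (mJ).
i_pts = np.array([0.0, 100.0, 200.0, 300.0])
e_pts = np.array([0.0, 2.0, 5.0, 9.0])

def eoff_mj(current_a):
    """Turn-off switching energy by linear interpolation of the stored curve,
    the kind of per-event lookup a real-time loss model performs."""
    return float(np.interp(current_a, i_pts, e_pts))

e = eoff_mj(150.0)   # halfway between the 100 A and 200 A curve points
```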

  6. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. 
Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
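    The estimation machinery (residuals, analytic response sensitivities, iterative update) can be illustrated on a one-parameter toy model rather than a structural FE model; the sensitivity here plays the role the DDM-computed sensitivities play in the framework:

```python
import numpy as np

# Toy model y(t) = sin(theta * t); estimate theta from noisy measurements
# by Gauss-Newton iterations using the analytic sensitivity dy/dtheta.
t = np.linspace(0.0, 5.0, 200)
theta_true = 1.3
rng = np.random.default_rng(0)
y_meas = np.sin(theta_true * t) + 0.01 * rng.normal(size=t.size)

theta = 1.0                                # deliberately off initial estimate
for _ in range(20):
    r = y_meas - np.sin(theta * t)         # response residuals
    s = t * np.cos(theta * t)              # response sensitivity dy/dtheta
    theta += (s @ r) / (s @ s)             # Gauss-Newton parameter update
```

    The same sensitivity vector also yields the Fisher information (here a scalar, s @ s divided by the noise variance) used for the Cramer-Rao uncertainty bounds.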

  7. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
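    Counting locally active modes via singular value decomposition of a sensitivity matrix can be sketched as follows; the matrix here is synthetic with a known rank of two, standing in for trajectory sensitivities:

```python
import numpy as np

# Synthetic 6-species x 10-parameter sensitivity matrix of rank 2.
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 10))

# Active modes = singular values above an error-control tolerance relative
# to the dominant one; this is the local minimal model dimension.
sv = np.linalg.svd(S, compute_uv=False)
active = int(np.sum(sv > 1e-8 * sv[0]))
```

    Repeating this along the trajectory gives a piecewise minimal dimension, as in the authors' time-interval decomposition.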

  8. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
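    The least-cost-distance side of this workflow is essentially shortest-path travel time to safety; a minimal Dijkstra sketch on a hypothetical network (no congestion, which is exactly what the agent-based model adds):

```python
import heapq

def travel_times(graph, safe):
    """Dijkstra from the safe node over an undirected weighted graph,
    returning each node's shortest travel time to safety."""
    dist = {safe: 0.0}
    heap = [(0.0, safe)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical three-node network; times are minutes along each edge.
g = {"safe": [("a", 2.0)], "a": [("safe", 2.0), ("b", 1.0)], "b": [("a", 1.0)]}
times = travel_times(g, "safe")
```

    Comparing these static times against the hazard arrival time is the screening step; bottleneck effects then require the agent-based simulation.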

  9. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields.

    PubMed

    Whitford, Paul C; Noel, Jeffrey K; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y; Onuchic, José N

    2009-05-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Go) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase, and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a C(alpha) structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature, (2) folding mechanisms are robust to variations of the energetic parameters, (3) protein folding free-energy barriers can be manipulated through parametric modifications, (4) the global folding mechanisms in a C(alpha) model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model, and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Because this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function.

  10. An All-atom Structure-Based Potential for Proteins: Bridging Minimal Models with All-atom Empirical Forcefields

    PubMed Central

    Whitford, Paul C.; Noel, Jeffrey K.; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y.; Onuchic, José N.

    2012-01-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Gō) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a Cα structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature; (2) folding mechanisms are robust to variations of the energetic parameters; (3) protein folding free energy barriers can be manipulated through parametric modifications; (4) the global folding mechanisms in a Cα model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model; and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Since this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function. PMID:18837035

  11. Expression for travel time based on diffusive wave theory: applicability and considerations

    NASA Astrophysics Data System (ADS)

    Aguilera, J. C.; Escauriaza, C. R.; Passalacqua, P.; Gironas, J. A.

    2017-12-01

    Prediction of hydrological response is of utmost importance when dealing with urban planning, risk assessment, or water resources management issues. With the advent of climate change, special care must be taken with respect to variations in rainfall and runoff due to rising temperature averages. Nowadays, while typical workstations have adequate power to run distributed routing hydrological models, it is still not enough for modeling on-the-fly, a crucial ability in a natural disaster context, where rapid decisions must be made. Semi-distributed travel time models, which compute a watershed's hydrograph without explicitly solving the full shallow water equations, appear as an attractive approach to rainfall-runoff modeling since, like fully distributed models, they superimpose a grid on the watershed and compute runoff based on cell parameter values. These models are heavily dependent on the travel time expression for an individual cell. Many models make use of expressions based on kinematic wave theory, which is not applicable in cases where watershed storage is important, such as mild slopes. This work presents a new expression for concentration times in overland flow, based on diffusive wave theory, which considers not only the effects of storage but also the effect of upstream contribution. Setting upstream contribution equal to zero gives an expression consistent with previous work on diffusive wave theory; on the other hand, neglecting storage effects (i.e., diffusion) is shown to be equivalent to kinematic wave theory, currently used in many spatially distributed travel time models. The newly found expression is shown to be dependent on plane discretization, particularly when dealing with very non-kinematic cases. This is shown to be the result of upstream contribution, which gets larger downstream, versus plane length. This result also sheds light on the limits of applicability of the expression: when a certain kinematic threshold is reached, the expression is no longer valid, and one must fall back to kinematic wave theory, for lack of a better option. This expression could be used for improving currently published spatially distributed travel time models, since they would become applicable in many new cases.
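
    The kinematic-wave limit that the new expression reduces to has a standard closed form for overland flow on a plane. The sketch below shows that limit (Manning-based, SI units); the paper's diffusive-wave expression itself is not reproduced here.

```python
def kinematic_travel_time(length_m, manning_n, slope, rain_ms):
    """Kinematic-wave travel time (s) for overland flow on a plane of
    given length, Manning roughness, slope, and effective rainfall
    intensity (m/s): t = (L*n/sqrt(S))**0.6 / i**0.4. This is the
    storage-free (no-diffusion) limit the abstract refers to."""
    return (length_m * manning_n / slope**0.5) ** 0.6 / rain_ms ** 0.4
```

    As expected physically, travel time grows with plane length and roughness and shrinks with slope and rainfall intensity; the diffusive-wave version adds storage and upstream-contribution effects on top of this.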

  12. Diagnosis of delay-deadline failures in real time discrete event models.

    PubMed

    Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha

    2007-10-01

    In this paper a method for fault detection and diagnosis (FDD) of real time systems has been developed. A modeling framework termed the real time discrete event system (RTDES) model is presented and a mechanism for FDD of the same has been developed. The use of the RTDES framework for FDD is an extension of the works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models is suited to real time systems because of their capability of representing timing faults leading to failures in terms of erroneous delays and deadlines, which FSM-based ones cannot address. The concept of measurement restriction of variables is introduced for RTDES and the consequent equivalence of states and indistinguishability of transitions have been characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and the procedure of constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.

  13. Predicting Ambulance Time of Arrival to the Emergency Department Using Global Positioning System and Google Maps

    PubMed Central

    Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig

    2014-01-01

    Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. 
A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
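
    The correction step can be pictured as ordinary least squares fitted on top of the raw street-network estimate. The sketch below uses a single predictor as a stand-in; the paper's model adds indicator covariates for lights-and-sirens, daylight, rush hour, and weather.

```python
def fit_correction(network_est, actual):
    """Fit actual ~ a + b * network_est by ordinary least squares.
    A one-predictor stand-in for the paper's regression correcting
    street-network transport-time estimates against dispatch records."""
    n = len(network_est)
    mx = sum(network_est) / n
    my = sum(actual) / n
    sxx = sum((x - mx) ** 2 for x in network_est)
    sxy = sum((x - mx) * (y - my) for x, y in zip(network_est, actual))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Street-network times that consistently underestimate actual times
# (as the study found) yield slope > 1 and a positive intercept:
a, b = fit_correction([5, 10, 15, 20], [8, 14, 20, 26])
```

    With the fitted coefficients, a corrected ETA is simply `a + b * network_estimate`, which is what a Google Maps-style web front end would display.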

  14. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Youngsoo; Carlberg, Kevin Thomas

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ 2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.

  15. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  16. IRI STORM validation over Europe

    NASA Astrophysics Data System (ADS)

    Haralambous, Haris; Vryonides, Photos; Demetrescu, Crişan; Dobrică, Venera; Maris, Georgeta; Ionescu, Diana

    2014-05-01

    The International Reference Ionosphere (IRI) model includes an empirical Storm-Time Ionospheric Correction Model (STORM) extension to account for storm-time changes of the F layer peak electron density (NmF2) during increased geomagnetic activity. This model extension is driven by past history values of the geomagnetic index ap (the index applied is the integral of ap over the previous 33 hours with a weighting function deduced from physically based modeling) and it adjusts the quiet-time F layer peak electron density (NmF2) to account for storm-time changes in the ionosphere. In this investigation, manually scaled hourly values of NmF2 measured during the main and recovery phases of selected storms for the maximum solar activity period of the current solar cycle are compared with the predicted IRI-2012 NmF2 over European ionospheric stations using the STORM model option. Based on this comparison, the performance of the STORM option during this period is quantified.
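
    The STORM driver can be sketched as a weighted sum of the 3-hourly ap values spanning the preceding 33 hours (11 samples). The decaying weights below are illustrative placeholders, not IRI's actual physics-derived filter.

```python
# Illustrative weights, oldest sample first, newest last; they sum to 1
# so that a constant ap history returns that constant. These are NOT the
# IRI STORM filter coefficients.
WEIGHTS = [0.02, 0.03, 0.05, 0.07, 0.09, 0.10, 0.11, 0.12, 0.13, 0.14, 0.14]

def storm_ap_index(ap_history, weights=WEIGHTS):
    """Weighted integral of 3-hourly ap over the preceding 33 hours
    (11 samples), the kind of filtered-ap driver STORM uses to scale
    quiet-time NmF2 during geomagnetic disturbances."""
    assert len(ap_history) == len(weights) == 11
    return sum(a * w for a, w in zip(ap_history, weights))
```

    A quiet history (ap near 4) leaves the index near the quiet level, while a storm onset in the most recent samples drives it sharply upward, triggering the NmF2 correction.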

  17. Evaluation of indoor air composition time variation in air-tight occupied spaces during night periods

    NASA Astrophysics Data System (ADS)

    Markov, Detelin

    2012-11-01

    This paper presents an easy-to-understand procedure for prediction of indoor air composition time variation in air-tight occupied spaces during the night periods. The mathematical model is based on the assumptions of homogeneity and perfect mixing of the indoor air, the ideal gas model for non-reacting gas mixtures, mass conservation equations for the entire system and for each species, a model for prediction of the basal metabolic rate of humans, as well as a model for prediction of the O2 consumption rate and both CO2 and H2O generation rates by breathing. Time variation of indoor air composition is predicted at constant indoor air temperature for three scenarios based on the analytical solution of the mathematical model. The results reveal both the most probable scenario for indoor air composition time variation in air-tight occupied spaces and a likely cause of morning tiredness after sleeping in a modern energy-efficient space.
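
    The well-mixed mass balance underlying such predictions has an analytic solution. The sketch below treats CO2 only; the generation rate, room volume, and air-change values are illustrative assumptions, not the paper's.

```python
import math

def co2_ppm(t_h, volume_m3, n_people, ach=0.1, c_out=420.0, c0=420.0,
            gen_per_person_m3h=0.02):
    """Analytic solution of the well-mixed CO2 balance
    V dC/dt = G + Q*(c_out - C) with ventilation Q = ach*V.
    Assumed numbers: ~0.02 m3/h CO2 per sleeping adult, 'ach' air
    changes per hour. Returns concentration in ppm after t_h hours."""
    g_ppm = n_people * gen_per_person_m3h / volume_m3 * 1e6  # ppm/h source
    if ach == 0.0:                 # perfectly air-tight: linear build-up
        return c0 + g_ppm * t_h
    c_ss = c_out + g_ppm / ach     # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-ach * t_h)
```

    With the air-change rate set to zero (the air-tight night-time case), CO2 climbs linearly and can reach several thousand ppm over a night, consistent with the morning-tiredness argument in the abstract.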

  18. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    PubMed

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
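
    The role of the sampling gap h in these error patterns can be illustrated with generic one-sided difference formulas. This is not the ZNN discretization itself, just a demonstration that halving h divides the truncation error of an order-p formula by about 2^p.

```python
import math

def fwd_diff(f, x, h):
    """Forward difference, truncation error O(h)."""
    return (f(x + h) - f(x)) / h

def three_pt_back(f, x, h):
    """Three-point backward Taylor-type formula, truncation error O(h^2):
    f'(x) ~ (3 f(x) - 4 f(x-h) + f(x-2h)) / (2h)."""
    return (3 * f(x) - 4 * f(x - h) + f(x - 2 * h)) / (2 * h)

# Empirical order check: halving h should divide the error by ~2^p.
x, exact = 1.0, math.cos(1.0)
for approx, p in [(fwd_diff, 1), (three_pt_back, 2)]:
    e1 = abs(approx(math.sin, x, 1e-3) - exact)
    e2 = abs(approx(math.sin, x, 5e-4) - exact)
    assert 0.8 * 2**p < e1 / e2 < 1.2 * 2**p
```

    The same style of experiment, applied to the residual of a discrete-time ZNN iteration, is how the O(h³) versus O(h²) versus O(h) patterns in the abstract are verified numerically.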

  19. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
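
    The update-timing schemes can be contrasted with a toy counter: "always" refits at every localization, while "error-based" refits only when the radial prediction error reaches the study's 3 mm threshold. (The reset of the error after a refit is not modeled in this sketch.)

```python
def count_updates(errors_mm, policy, threshold_mm=3.0):
    """Count model refits over one fraction's localization sequence under
    two of the paper's schemes. 'errors_mm' are radial prediction errors
    at successive (roughly once-per-minute) tumor localizations."""
    if policy == "always":
        return len(errors_mm)
    if policy == "error":
        return sum(1 for e in errors_mm if e >= threshold_mm)
    raise ValueError("unknown policy: %s" % policy)
```

    The study's point is that a surrogate-only trigger achieved the accuracy of these direct-measurement triggers with far fewer updates (4 versus 9 or 24 over 20 min).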

  20. Skill of Ensemble Seasonal Probability Forecasts

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk

    2010-05-01

    In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead-time of 14 months. The nature of this skill is discussed, and prospects for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
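
    Kernel dressing and climatology blending combine into a simple mixture density. The sketch below uses Gaussian kernels; the kernel width, blend weight, and climatology parameters are all illustrative and would normally be fitted to past forecast-verification pairs.

```python
import math

def blended_density(x, ensemble, sigma, clim_mean, clim_sd, alpha):
    """Kernel-dressed ensemble density blended with a Gaussian climatology:
    p(x) = alpha * mean_i N(x; m_i, sigma) + (1-alpha) * N(x; mu_c, sd_c).
    alpha = 1 trusts the (dressed) ensemble alone; alpha = 0 falls back
    entirely to climatology, the rational choice when the model has no skill."""
    def normal(z, mu, sd):
        return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    kernel = sum(normal(x, m, sigma) for m in ensemble) / len(ensemble)
    return alpha * kernel + (1.0 - alpha) * normal(x, clim_mean, clim_sd)
```

    Letting the blend weight alpha shrink with lead time is one concrete way to implement the paper's conclusion that the rational weighting of model output versus climatology varies with lead time.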

  1. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral with a fuzzy measure from multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
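
    The non-additive aggregation named here is the discrete Choquet integral. A minimal sketch follows, with the fuzzy measure supplied as a dict over attribute coalitions; the measure values in the test are illustrative, whereas in practice they are fitted (e.g. with 'kappalab').

```python
def choquet(values, mu):
    """Discrete Choquet integral of attribute scores w.r.t. a fuzzy
    measure mu: a dict mapping frozensets of attribute names to weights,
    with mu[full set] = 1. Non-additive mu captures interaction between
    attributes; an additive mu reduces this to a weighted mean."""
    items = sorted(values.items(), key=lambda kv: kv[1])   # ascending scores
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        coalition = frozenset(names[i:])   # attributes scoring >= v
        total += (v - prev) * mu[coalition]
        prev = v
    return total
```

    With interaction-aware measures, an attribute's contribution to overall liking depends on which other attributes are simultaneously strong, which is exactly the effect additive regression cannot express.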

  2. Work-family conflict, emotional exhaustion and performance-based self-esteem: reciprocal relationships.

    PubMed

    Richter, Anne; Schraml, Karin; Leineweber, Constanze

    2015-01-01

    The three constructs of work-family conflict, emotional exhaustion and performance-based self-esteem are all related to tremendous negative consequences for the individual, the organization as well as for society. Even though there are studies that connect two of those constructs, the prospective relations between all three of them have not been studied yet. We explored the prospective relations between the three constructs in a large Swedish data set representative of the Swedish workforce. Gender differences in the relations were investigated. Longitudinal data with a 2-year time lag were gathered from 3,387 working men and women who responded to the 2006 and 2008 waves of the Swedish Longitudinal Occupational Survey of Health. Four different cross-lagged models were analysed. In the best fitting model, higher levels of work-family conflict at time 1 were associated with an increased level of performance-based self-esteem at time 2, but not with emotional exhaustion, after controlling for having children, gender, education and age. Also, relationships between emotional exhaustion at time 1 and both work-family conflict and performance-based self-esteem at time 2 could be established. Furthermore, relationships between performance-based self-esteem at time 1 and both work-family conflict and emotional exhaustion at time 2 were found. Multiple-group analysis did not show any differences in the relations of the tested constructs over time for either men or women. We conclude that the three constructs are interrelated and best understood through a reciprocal model. No differences were found between men and women.

  3. Clock Synchronization Through Time-Variant Underwater Acoustic Channels

    DTIC Science & Technology

    2012-09-01

    stage, we analyze a series of chirp responses to identify the least time-varying multipath present in the channel between the two nodes. Based on the... based on the detected arrivals and determines the most stable one based on the correlation coefficient of a model fit to the time-of-arrival estimates... short periods of time. Nevertheless, signal fluctuations can occur due to transceiver motion or inherent changes within the propagation medium

  4. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    NASA Astrophysics Data System (ADS)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: the FDTD method depends heavily on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a time-domain spectral-element (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can contribute to an accurate result by increasing the order of polynomials and suppressing spurious solutions. The BE method is a stable time-discretization technique that has no limitation on time steps and can guarantee a higher accuracy during the iteration process. To minimize the number of non-zeros in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integral technique. A direct solver with its speed independent of the condition number is adopted for quickly solving the large-scale sparse linear equations system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time range 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for magnetic field B and magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both 1D and 3D models, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order yields little improvement in modeling accuracy, but the time interval plays an important role. This research is supported by Key Program of National Natural Science Foundation of China (41530320), China Natural Science Foundation for Young Scientists (41404093), and Key National Research Project of China (2016YFC0303100, 2017YFC0601900). Figure 1: (a) AEM system over a 3D earth model; (b) magnetic field Bz; (c) magnetic induction dBz/dt.

  5. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  7. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis. In this case, the outcome variable of interest is time until an event occurs, where the time to failure of a specific experimental unit may be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model via PIC data. Moreover, several imputation techniques were used: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results from the data set indicated that the parametric Cox model proved superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median showed better results with respect to the estimation of the survival function.
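
    The single-interval imputation schemes compared here are one-liners; random, mean, and median imputation need the whole sample and are omitted from this sketch.

```python
def impute_interval(left, right, method="midpoint"):
    """Replace an interval-censored event time (known only to lie in
    [left, right]) with a single value, mirroring the midpoint / left /
    right-point schemes compared in the abstract. The imputed times can
    then be fed to any complete-data survival fit (e.g. a Cox model)."""
    if method == "midpoint":
        return (left + right) / 2.0
    if method == "left":
        return left
    if method == "right":
        return right
    raise ValueError("unknown method: %s" % method)
```

    Midpoint imputation is the usual default because left- and right-point imputation systematically shift the estimated survival curve early or late.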

  8. Simulation-based decision support framework for dynamic ambulance redeployment in Singapore.

    PubMed

    Lam, Sean Shao Wei; Ng, Clarence Boon Liang; Nguyen, Francis Ngoc Hoang Long; Ng, Yih Yng; Ong, Marcus Eng Hock

    2017-10-01

    Dynamic ambulance redeployment policies introduce much more flexibility into ambulance resource allocation by capitalizing on the definite geospatial-temporal variations in ambulance demand patterns over time-of-day and day-of-week effects. A novel modelling framework based on the Approximate Dynamic Programming (ADP) approach, leveraging a Discrete Events Simulation (DES) model, is proposed in this paper for dynamic ambulance redeployment in Singapore. The study was based on Singapore's national Emergency Medical Services (EMS) system. Based on a dataset comprising 216,973 valid incidents over a continuous two-year study period from 1 January 2011-31 December 2012, a DES model for the EMS system was developed. An ADP model based on linear value function approximations was then evaluated using the DES model via the temporal difference (TD) learning family of algorithms. The objective of the ADP model is to derive approximately optimal dynamic redeployment policies based on the primary outcome of ambulance coverage. Considering an 8 min response-time threshold, an estimated 5% reduction in the proportion of calls that cannot be reached within the threshold (equivalent to approximately 8000 dispatches) was observed from the computational experiments. The study also revealed that redeployment policies restricted within the same operational division could potentially result in more promising response time performance. Furthermore, the best policy involved the combination of redeploying ambulances whenever they are released from service and relocating ambulances that are idle in bases. This study demonstrated the successful application of an approximate modelling framework based on ADP that leverages a detailed DES model of Singapore's EMS system to generate approximately optimal dynamic redeployment plans. Various policies and scenarios relevant to the Singapore EMS system were evaluated.
Copyright © 2017 Elsevier B.V. All rights reserved.
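
    The core of such an ADP scheme, a linear value-function approximation trained by temporal differences, can be sketched in a few lines. The feature construction and the DES simulator are the hard parts and are not shown; names, step size, and discount are illustrative.

```python
def td0_update(weights, features, reward, next_features,
               alpha=0.05, gamma=0.95):
    """One TD(0) update of a linear value function V(s) = w . phi(s):
    move the weights along phi(s) by the discounted TD error. Repeated
    over simulated EMS trajectories, this is the building block of the
    TD-learning family the abstract evaluates."""
    v = sum(w * f for w, f in zip(weights, features))
    v_next = sum(w * f for w, f in zip(weights, next_features))
    delta = reward + gamma * v_next - v          # temporal-difference error
    return [w + alpha * delta * f for w, f in zip(weights, features)]
```

    A redeployment policy then acts greedily with respect to the learned value function: each time an ambulance is released, it is sent to the base whose resulting post-decision state has the highest estimated value.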

  9. Precipitation extremes on multiple timescales - Bartlett-Lewis rectangular pulse model and intensity-duration-frequency curves

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.

    2017-12-01

    For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. The data are, however, not always available, and thus model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation are Poisson cluster processes, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to well reproduce certain characteristics found in observations. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained on the one hand directly from the observations and on the other hand from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period larger than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
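
    One simple way to encode a duration-dependent GEV scale, in the spirit of the consistent IDF parameterization described above; the functional form and every parameter value below are illustrative, not the fitted Berlin-Dahlem values.

```python
import math

def idf_return_level(t_years, duration_h,
                     mu=5.0, sigma0=8.0, eta=0.7, xi=0.1):
    """GEV return level with a scale parameter that decays with duration,
    sigma(d) = sigma0 * d**(-eta): one simple duration-dependent scale of
    the kind used to obtain a consistent family of IDF curves. Returns the
    intensity exceeded on average once every t_years for the given duration."""
    sigma = sigma0 * duration_h ** (-eta)
    y = -math.log(1.0 - 1.0 / t_years)          # exceedance on Gumbel scale
    return mu + sigma / xi * (y ** (-xi) - 1.0)
```

    Because all durations share one parametric family, the resulting curves cannot cross, which is the consistency property the abstract highlights over fitting each duration separately.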

  10. Real-Time Global Flood Estimation Using Satellite-Based Precipitation and a Coupled Land Surface and Routing Model

    NASA Technical Reports Server (NTRS)

    Wu, Huan; Adler, Robert F.; Tian, Yudong; Huffman, George J.; Li, Hongyi; Wang, JianJian

    2014-01-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50 deg. N - 50 deg. S at relatively high spatial (approximately 12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is approximately 0.9 and the false alarm ratio is approximately 0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30 deg. S - 30 deg. N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. There were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.
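
    The Nash-Sutcliffe coefficient used in the streamflow validation is a one-liner worth spelling out:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE(sim) / SSE(mean benchmark).
    1 means a perfect simulation; 0 means no better than predicting the
    observed mean; negative values mean worse than the mean benchmark,
    which is why the abstract counts stations with *positive* values."""
    m = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - m) ** 2 for o in obs)
    return 1.0 - num / den
```

    The reported means of 0.19 (daily) and 0.33 (monthly) therefore indicate modest but genuine skill over a climatological-mean benchmark at those gauges.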

  11. Real-time global flood estimation using satellite-based precipitation and a coupled land surface and routing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Huan; Adler, Robert F.; Tian, Yudong

    2014-03-01

A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50°N–50°S at relatively high spatial (~12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is ~0.9 and the false alarm ratio is ~0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30°S–30°N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. Finally, there were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.

  12. Shilling attack detection for recommender systems based on credibility of group users and rating time series.

    PubMed

    Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is therefore of great significance for maintaining their fairness and sustainability. Existing studies suffer from poor universality of the algorithms, difficulty in selecting user profile attributes, and the lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying in depth a credibility evaluation model based on the rating prediction model used to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed: suspicious rating time segments are determined by constructing a time series, and the data streams of the rated items are examined for suspicious rating segments. To analyse the features of shilling attacks via group users' credibility, an abnormal group user discovery method based on time series and time windows is proposed. Standard testing datasets are used to verify the effect of the proposed method.
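The suspicious-window idea can be illustrated as a simple rate test on an item's rating stream: bucket ratings into fixed-length time windows and flag windows whose rating count is far above the item's typical rate. This is a toy sketch of the general technique, not the paper's algorithm; the threshold rule and names are assumptions:

```python
def suspicious_windows(timestamps, window, k=2.0):
    """Flag window indices whose rating count exceeds mean + k * std over all windows."""
    if not timestamps:
        return []
    start = min(timestamps)
    counts = {}
    for t in timestamps:
        idx = int((t - start) // window)
        counts[idx] = counts.get(idx, 0) + 1
    n = max(counts) + 1
    per_win = [counts.get(i, 0) for i in range(n)]
    mean = sum(per_win) / n
    std = (sum((c - mean) ** 2 for c in per_win) / n) ** 0.5
    threshold = mean + k * std
    return [i for i, c in enumerate(per_win) if c > threshold]
```

A burst of forged ratings concentrated in time then stands out as one or two flagged windows, which can be handed to the target-item and group-credibility analyses described above.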

  13. Shilling attack detection for recommender systems based on credibility of group users and rating time series

    PubMed Central

    Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is therefore of great significance for maintaining their fairness and sustainability. Existing studies suffer from poor universality of the algorithms, difficulty in selecting user profile attributes, and the lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying in depth a credibility evaluation model based on the rating prediction model used to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed: suspicious rating time segments are determined by constructing a time series, and the data streams of the rated items are examined for suspicious rating segments. To analyse the features of shilling attacks via group users' credibility, an abnormal group user discovery method based on time series and time windows is proposed. Standard testing datasets are used to verify the effect of the proposed method. PMID:29742134

  14. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  15. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE PAGES

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    2018-03-01

We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  16. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  17. A Memory-Based Model of Hick's Law

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Anderson, John R.

    2011-01-01

    We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
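Hick's law itself is compact enough to state directly: mean choice reaction time grows roughly linearly with the logarithm of the number of alternatives, RT = a + b·log2(N + 1). A quick sketch (the coefficient values here are arbitrary illustrations, not fitted parameters from the article):

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted mean reaction time (seconds) for a choice among n alternatives.

    The +1 inside the logarithm is the conventional correction for the
    additional uncertainty about whether any stimulus will appear at all.
    """
    return a + b * math.log2(n_alternatives + 1)
```

Note that each doubling of (N + 1) adds the same increment b to the predicted reaction time, which is the signature linear-in-log form the memory-based model seeks to explain.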

  18. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed in which the model uses stochastic parameters that vary with time. The prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter lengths.
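The two criteria by which the interval forecasts are compared, empirical coverage and interval length, can be computed in a few lines (an illustrative helper, not the paper's code):

```python
def evaluate_intervals(observed, lower, upper):
    """Return (coverage, mean_length) for a set of prediction intervals.

    coverage:    fraction of observed values falling inside [lower, upper].
    mean_length: average width of the intervals; shorter is better at equal coverage.
    """
    hits = sum(lo <= y <= hi for y, lo, hi in zip(observed, lower, upper))
    lengths = [hi - lo for lo, hi in zip(lower, upper)]
    return hits / len(observed), sum(lengths) / len(lengths)
```

A model whose intervals achieve the same coverage with a smaller mean length, as reported for the stochastic-parameter model above, is the preferable forecaster.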

  19. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a way to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach and uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a platform-independent model (PIM) created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  20. Multi-model blending

    DOEpatents

    Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran

    2016-10-18

A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set by designating the sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
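The claimed steps, assembling historical model predictions as predictor variables and measured conditions as response variables and then fitting a learner, amount to stacked regression. A minimal least-squares sketch of that idea with NumPy (an illustration of the technique, not the patented system; all names are made up):

```python
import numpy as np

def train_blend(predictions, actuals):
    """Learn blending weights (with intercept) from historical model outputs.

    predictions: (n_times, n_models) array, each column one model's historical forecast.
    actuals:     (n_times,) array of measured conditions at the same times.
    """
    X = np.column_stack([np.ones(len(actuals)), predictions])
    weights, *_ = np.linalg.lstsq(X, actuals, rcond=None)
    return weights

def blend(weights, predictions):
    """Apply learned weights to new per-model predictions."""
    X = np.column_stack([np.ones(len(predictions)), predictions])
    return X @ weights
```

Any regression learner could stand in for the least-squares fit here; the essential structure is that the training targets are verified measurements while the features are the competing models' forecasts for the same historical times.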

  1. A Rescorla-Wagner drift-diffusion model of conditioning and timing

    PubMed Central

    Alonso, Eduardo

    2017-01-01

    Computational models of classical conditioning have made significant contributions to the theoretic understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it can provide an adequate account for 8, and a partial account for the other 2. We argue that the model can account for more phenomena in the chosen set than these other similar in scope models: CSC-TD, MS-TD, Learning to Time and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used. PMID:29095819

  2. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  3. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  4. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well known energy loss processes to develop a stochastic Monte-Carlo based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times including the ATM, TGF-Smad, and WNT signaling pathways.
We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how GCR event rates map to biological signaling induction and relaxation times. We considered several hypotheses related to signaling and cancer risk, and then performed simulations for conditions where aberrant or adaptive signaling would occur on long-duration space missions. Our results do not support the conventional assumptions of dose, linearity and additivity. A discussion is given of how event-based systems biology models, which focus on biological signaling as the mechanism that propagates damage or adaptation, can be further developed for cancer and CNS space radiation risk projections.

  5. Experience of Time Passage:. Phenomenology, Psychophysics, and Biophysical Modelling

    NASA Astrophysics Data System (ADS)

    Wackermann, Jiří

    2005-10-01

The experience of time's passing appears, from the first-person perspective, to be a primordial subjective experience, seemingly inaccessible to third-person accounts of time perception (psychophysics, cognitive psychology). In our analysis of the 'dual klepsydra' model of reproduction of temporal durations, time passage occurs as a cognitive construct, based upon a more elementary ('proto-cognitive') function of the psychophysical organism. This conclusion contradicts the common concepts of 'subjective' or 'psychological' time as readings of an 'internal clock'. Our study shows how phenomenological, experimental and modelling approaches can be fruitfully combined.

  6. V/STOL tilt rotor aircraft study mathematical model for a real time simulation of a tilt rotor aircraft (Boeing Vertol Model 222), volume 8

    NASA Technical Reports Server (NTRS)

    Rosenstein, H.; Mcveigh, M. A.; Mollenkof, P. A.

    1973-01-01

A mathematical model for a real-time simulation of a tilt rotor aircraft was developed. The mathematical model is used for evaluating aircraft performance and handling qualities. The model is based on an eleven-degree-of-freedom total force representation. The rotor is treated as a point source of forces and moments with appropriate response time lags and actuator dynamics. The aerodynamics of the wing, tail, rotors, landing gear, and fuselage are included.

  7. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  8. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    PubMed Central

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730

  9. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

In today's SoC designs, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task; therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we propose a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard mechanism, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe the register specification in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template that is translated to an IP-XACT description, from which register models can be generated using commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.

  10. Data-adaptive Harmonic Decomposition and Real-time Prediction of Arctic Sea Ice Extent

    NASA Astrophysics Data System (ADS)

    Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael

    2017-04-01

Decline in the Arctic sea ice extent (SIE) has profound socio-economic implications and is a focus of active scientific research. Of particular interest is prediction of SIE on subseasonal time scales, i.e. from early summer into fall, when sea ice coverage in the Arctic reaches its minimum. However, subseasonal forecasting of SIE is very challenging due to the high variability of the ocean and atmosphere over the Arctic in summer, the short observational record, and the inadequacies of physics-based models in simulating sea-ice dynamics. The Sea Ice Outlook (SIO) by the Sea Ice Prediction Network (SIPN, http://www.arcus.org/sipn) is a collaborative effort to facilitate and improve subseasonal prediction of September SIE by physics-based and data-driven statistical models. Data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) techniques [Chekroun and Kondrashov, 2017] have been successfully applied to nonlinear stochastic modeling, as well as retrospective and real-time forecasting, of the Multisensor Analyzed Sea Ice Extent (MASIE) dataset in four key Arctic regions. In particular, DAH-MSLM predictions outperformed most statistical and physics-based models in the real-time 2016 SIO submissions. The key success factor is the ability of DAH to disentangle the complex regional dynamics of MASIE with data-adaptive harmonic spatio-temporal patterns that reduce the data-driven modeling effort to elemental MSLMs, stacked per frequency, with a fixed and small number of model coefficients to estimate.

  11. Calculation of broadband time histories of ground motion, Part II: Kinematic and dynamic modeling using theoretical Green's functions and comparison with the 1994 northridge earthquake

    USGS Publications Warehouse

    Hartzell, S.; Guatteri, Mariagiovanna; Mai, P.M.; Liu, P.-C.; Fisk, M. R.

    2005-01-01

In the evolution of methods for calculating synthetic time histories of ground motion for postulated earthquakes, kinematic source models have dominated to date because of their ease of application. Dynamic models, however, which incorporate a physical relationship between the important faulting parameters of stress drop, slip, rupture velocity, and rise time, are becoming more accessible. This article compares a class of kinematic models based on the summation of a fractal distribution of subevent sizes with a dynamic model based on the slip-weakening friction law. Kinematic modeling is done for the frequency band 0.2 to 10.0 Hz; dynamic models are calculated from 0.2 to 2.0 Hz. The strong motion data set for the 1994 Northridge earthquake is used to evaluate and compare the synthetic time histories. Source models are propagated to the far field by convolution with 1D and 3D theoretical Green's functions. In addition, the kinematic model is used to evaluate the importance of propagation path effects: velocity structure, scattering, and nonlinearity. At present, the kinematic model gives a better broadband fit to the Northridge ground motion than the simple slip-weakening dynamic model. In general, the dynamic model overpredicts rise times and produces insufficient shorter-period energy. Within the context of the slip-weakening model, the Northridge ground motion requires a short slip-weakening distance, on the order of 0.15 m or less. A more complex dynamic model including rate weakening, or one that allows shorter rise times near the hypocenter, may fit the data better.

  12. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    NASA Astrophysics Data System (ADS)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

This paper studies discrete event simulation (DES), a computer-based modelling technique that imitates a real system, applied to the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit. The model input is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing system with an M/M/2 queuing model, in which the pharmacists act as the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. Waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully staff all counters; even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
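A DES model of an M/M/c system can also be checked against the closed-form Erlang C result for expected queue waiting time, with arrival rate lam, per-server service rate mu, and c servers (a standard-formula sketch; the rates in any example are hypothetical, not the hospital's data):

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability that an arriving customer must wait (Erlang C formula)."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # server utilization; must be < 1 for stability
    top = (a ** c / factorial(c)) / (1 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Mean time in queue Wq for an M/M/c system."""
    return erlang_c(c, lam, mu) / (c * mu - lam)
```

Adding one server drives Wq down sharply at moderate utilization, which is consistent with the roughly 50% reduction in average patient waiting time reported above.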

  13. Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.

    2006-01-01

In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects of the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
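Sampling a continuous-time correlated phase process to get a discrete-time model is commonly done by treating the phase as a first-order Gauss-Markov (Ornstein-Uhlenbeck) process: phi[n+1] = alpha * phi[n] + w[n], with alpha = exp(-Ts/tau). The sketch below assumes that simple model with made-up parameter names, not the article's specific atmospheric statistics:

```python
import math
import random

def sample_phase(n, ts, tau, sigma, seed=0):
    """Simulate n samples of a first-order Gauss-Markov phase process.

    ts:    sampling period of the receiver.
    tau:   correlation time of the atmospheric phase fluctuations.
    sigma: stationary standard deviation of the phase.
    """
    alpha = math.exp(-ts / tau)
    # Drive-noise scale chosen so the stationary variance equals sigma**2.
    drive = sigma * math.sqrt(1.0 - alpha ** 2)
    rng = random.Random(seed)
    phi, out = 0.0, []
    for _ in range(n):
        phi = alpha * phi + drive * rng.gauss(0.0, 1.0)
        out.append(phi)
    return out
```

As Ts shrinks relative to tau, alpha approaches 1 and consecutive samples become strongly correlated, which is exactly the sampling-rate dependence of the nonstationarity discussed above.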

  14. Probabilistic Evaluation of Competing Climate Models

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Chatterjee, S.; Heyman, M.; Cressie, N.

    2017-12-01

    A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. Here, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data, over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. We compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set, as an illustration.

  15. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  16. Moving alcohol prevention research forward-Part II: new directions grounded in community-based system dynamics modeling.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

    Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties (stocks, flows and feedbacks) capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
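
    The stock-and-flow core of a system dynamics model reduces to ordinary differential equations that can be Euler-integrated. A minimal illustrative sketch of the building block named above, a single generic stock with a constant inflow and a first-order outflow as the feedback (the parameters are arbitrary, not drawn from any alcohol-misuse model):

```python
def simulate_stock(inflow=10.0, tau=5.0, s0=0.0, dt=0.1, t_end=100.0):
    """Euler-integrate one stock: dS/dt = inflow - S/tau.
    The outflow term is the feedback: it grows as the stock fills,
    so S settles at the equilibrium inflow * tau."""
    s = s0
    for _ in range(int(t_end / dt)):
        s += dt * (inflow - s / tau)
    return s

print(simulate_stock())   # approaches the equilibrium 10 * 5 = 50
```

    Real community-built models couple many such stocks; the time-delayed effects mentioned in the abstract arise from chains of stocks between a flow and its feedback.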

  17. AR(p)-based detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Alvarez-Ramirez, J.; Rodriguez, E.

    2018-07-01

    Autoregressive models are commonly used for modeling time series from nature, economics and finance. This work explored simple autoregressive AR(p) models to remove long-term trends in detrended fluctuation analysis (DFA). Crude oil prices and the bitcoin exchange rate were considered, the former corresponding to a mature market and the latter to an emergent market. Results showed that AR(p)-based DFA performs similarly to traditional DFA. However, the AR(p) variant also provides information on the stability of long-term trends, which is valuable for understanding and quantifying the dynamics of complex time series from financial systems.
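
    The classic DFA procedure, integrate the series, detrend it window by window, and read the scaling exponent off a log-log fit of the fluctuation function, can be sketched as below. The paper's variant swaps the polynomial trend for an AR(p) fit; this sketch keeps the traditional polynomial detrending:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Classic DFA: integrate to a profile, remove a polynomial trend in
    each non-overlapping window, return the fluctuation F(n) per scale."""
    y = np.cumsum(x - np.mean(x))                 # profile
    F = []
    for n in scales:
        msq = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                     # white noise test signal
scales = np.array([8, 16, 32, 64, 128])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for white noise
```

    The exponent alpha near 0.5 flags uncorrelated noise; persistent series such as trending asset prices give larger values.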

  18. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  19. GPS-based Microenvironment Tracker (MicroTrac) Model to Estimate Time-Location of Individuals for Air Pollution Exposure Assessments: Model Evaluation in Central North Carolina

    EPA Science Inventory

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...

  20. "It's about Improving My Practice": The Learner Experience of Real-Time Coaching

    ERIC Educational Resources Information Center

    Sharplin, Erica J.; Stahl, Garth; Kehrwald, Ben

    2016-01-01

    This article reports on pre-service teachers' experience of the Real-Time Coaching model, an innovative technology-based approach to teacher training. The Real-Time Coaching model uses multiple feedback cycles via wireless technology to develop within pre-service teachers the specific skills and mindset toward continual improvement. Results of…

  1. Pointillist, Cyclical, and Overlapping: Multidimensional Facets of Time in Online Learning

    ERIC Educational Resources Information Center

    Ihanainen, Pekka; Moravec, John W.

    2011-01-01

    A linear, sequential time conception based on in-person meetings and pedagogical activities is not enough for those who practice and hope to enhance contemporary education, particularly where online interactions are concerned. In this article, we propose a new model for understanding time in pedagogical contexts. Conceptual parts of the model will…

  2. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  3. Reply to Comment by Roques et al. on "Base Flow Recession from Unsaturated-Saturated Porous Media considering Lateral Unsaturated Discharge and Aquifer Compressibility"

    NASA Astrophysics Data System (ADS)

    Liang, Xiuyu; Zhan, Hongbin; Zhang, You-Kuan; Schilling, Keith

    2018-04-01

    Roques et al. (https://doi.org/10.1002/2017WR022085) claim to have proposed an exponential time step (ETS) method that improves on the computing method of Liang et al. (https://doi.org/10.1002/2017WR020938), which used a constant time step (CTS) method for the derivative dQ/dt in field data, where Q is the base flow discharge and t is the time since the start of base flow recession. This reply emphasizes that the main objective of Liang et al. (https://doi.org/10.1002/2017WR020938) was to develop an analytical model to investigate the effects of unsaturated flow on base flow recession, not to advance data interpretation methods. The analytical model indicates that the base flow recession hydrograph behaves as dQ/dt ~ aQ^b with the exponent b close to 1 at late times, which is consistent with previous theoretical models. The model of Liang et al. (https://doi.org/10.1002/2017WR020938) was applied to field data in which the derivative dQ/dt was computed using the CTS method, a method that has been widely adopted in previous studies. The ETS method proposed by Roques et al. (https://doi.org/10.1016/j.advwatres.2017.07.013) appears to be a good alternative, but its accuracy needs further validation. Using slopes to fit field data as proposed by Roques et al. (https://doi.org/10.1002/2017WR022085) matches the data satisfactorily at early times but performs less satisfactorily at late times, leading to an exponent b noticeably larger than 1.

  4. BIOREACTOR ECONOMICS, SIZE AND TIME OF OPERATION (BEST) COMPUTER SIMULATOR FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS

    EPA Science Inventory

    BEST (bioreactor economics, size and time of operation) is an Excel™ spreadsheet-based model that is used in conjunction with the public domain geochemical modeling software, PHREEQCI. The BEST model is used in the design process of sulfate-reducing bacteria (SRB) field bioreacto...

  5. An optimization-based approach for solving a time-harmonic multiphysical wave problem with higher-order schemes

    NASA Astrophysics Data System (ADS)

    Mönkölä, Sanna

    2013-06-01

    This study considers developing numerical solution techniques for the computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model, in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving the time-harmonic problems. Instead of that, we focus on finding periodic solutions without solving the time-harmonic problems directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but the approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux. Thus, we accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.

  6. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, correct type I error rates and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
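
    The cluster-bootstrap idea, resample whole clusters with replacement and take the spread of the recomputed statistic as the SE, is generic. A sketch with the sample mean standing in for a Cox regression coefficient (the simulated data and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_per = 20, 10
# Simulated clustered data: shared cluster effect + individual noise.
clusters = [u + rng.standard_normal(n_per)
            for u in rng.standard_normal(n_clusters)]

def cluster_bootstrap_se(clusters, stat, n_boot=2000, seed=1):
    """Resample whole clusters with replacement; the SE is the standard
    deviation of the statistic over the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clusters), len(clusters))
        reps.append(stat(np.concatenate([clusters[i] for i in idx])))
    return np.std(reps, ddof=1)

pooled = np.concatenate(clusters)
naive_se = pooled.std(ddof=1) / np.sqrt(len(pooled))   # ignores clustering
boot_se = cluster_bootstrap_se(clusters, np.mean)      # accounts for it
```

    With correlated clusters the naive SE is far too small, which is exactly the under-estimation the abstract describes for the conventional Cox model.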

  7. Using a physically-based transit time distribution function to estimate the hydraulic parameters and hydraulic transit times of an unconfined aquifer from tritium measurements

    NASA Astrophysics Data System (ADS)

    Farlin, Julien; Maloszewski, Piotr; Schneider, Wilfried; Gallé, Tom

    2014-05-01

    Groundwater transit time is of interest in environmental studies pertaining to the transport of pollutants from their source to the aquifer outlet (spring or pumping well) or to an observation well. Different models have been proposed to describe the distribution of transit times within groundwatersheds, the most common being the dispersion model and the exponential-piston-flow model (EPM), both proposed by Maloszewski and Zuber (1982), and the (two- or three-parameter) gamma model (Amin and Campana, 1996; Kirchner et al., 1999). Choosing which function applies best is a recurrent and controversial problem in hydrogeology. The object of this study is to revisit the applicability of the EPM for unconfined aquifers, and to introduce an alternative model based explicitly on groundwater hydraulics. The alternative model is based on the transit time of water from any point at the groundwater table to the aquifer outlet, and is used to inversely calculate the hydraulic parameters of a fractured unconfined sandstone aquifer from tritium measurements made in a series of contact springs. This model is compared with the EPM, which is usually adopted to describe the transit time distribution of confined and unconfined aquifers alike. Both models are tested against observations, and it is shown that the EPM fails the test for some of the springs and generally seems to overestimate the older water component. Amin, I. E., and M. E. Campana (1996), A general lumped parameter model for the interpretation of tracer data and transit time calculation in hydrologic systems, Journal of Hydrology, 179, 1-21, doi: 10.1016/0022-1694(95)02880-3. Kirchner, J. W., X. H. Feng, and C. Neal (1999), Fractal stream chemistry and its implications for contaminant transport in catchments, Nature, 403, 524-527, doi: 10.1038/35000537. Maloszewski, P., and A. Zuber (1982), Determining the turnover time of groundwater systems with the aid of environmental tracers, Journal of Hydrology, 57, 207-231, doi: 10.1016/0022-1694(82)90147-0.

  8. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.

  9. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both the GPS-only and the combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for a 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs shorten the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
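
    Ordinary kriging predicts a value at an unsampled point as a weighted sum of station values, with weights obtained from a covariance model plus an unbiasedness constraint. A self-contained sketch (the exponential covariance, its parameters and the station geometry are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_par=5.0):
    """Ordinary kriging with an assumed exponential covariance
    C(h) = sill * exp(-h / rng_par)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng_par)
    A = np.ones((n + 1, n + 1))     # bordered system: covariances + the
    A[:n, :n] = C                   # sum-to-one (unbiasedness) constraint
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy - xy0, axis=-1)
    b = np.append(sill * np.exp(-d0 / rng_par), 1.0)
    w = np.linalg.solve(A, b)[:n]   # kriging weights
    return w @ z

# Hypothetical stations with VTEC-like values (units: TECU):
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])
print(ordinary_kriging(xy, z, np.array([0.5, 0.5])))
```

    With no nugget term the predictor is exact at the stations, which is the property that makes kriging attractive for gridding sparse reference-station data.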

  10. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step, the complexity of the problem is reduced in linear time. In the second step, the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used: it computes a point that is common to two convex polyhedra, and the polyhedra intersect if such a point exists. For the simplified geometrical model of Ropsus the algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
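
    As a lower-dimensional illustration of intersecting convex shapes (the paper tests 3D polyhedra via linear programming, which this sketch does not reproduce), Sutherland-Hodgman clipping computes the intersection of a polygon with a convex polygon in 2D; a non-empty result means the shapes collide:

```python
def clip(subject, clipper):
    """Clip `subject` against a CONVEX `clipper` (vertices counter-clockwise).
    Returns the vertices of the intersection polygon (empty if disjoint)."""
    def inside(p, a, b):   # p lies on or left of the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def cross_pt(p, q, a, b):   # segment p->q meets the line through a, b
        den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
        t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    out = subject
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        inp, out = out, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(cross_pt(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(cross_pt(p, q, a, b))
    return out

sq1 = [(0, 0), (1, 0), (1, 1), (0, 1)]                   # CCW unit square
sq2 = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]   # overlapping square
overlap = clip(sq1, sq2)   # non-empty, so the squares intersect
```

    The LP formulation in the paper generalizes this: feasibility of the stacked half-space constraints of two polyhedra is exactly the existence of a common point.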

  11. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    PubMed

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.
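
    The forecasting half of such a pipeline can be sketched with a plain least-squares AR(p) fit and iterated one-step predictions. This is a simplification: the paper uses a full ARMA model with an HMM classifier on top, and the simulated series and its coefficient below are illustrative:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p): regress x[t] on x[t-1], ..., x[t-p]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    phi, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return phi

def forecast(x, phi, steps):
    """Iterate one-step-ahead AR predictions to forecast upcoming states."""
    hist = list(x)
    for _ in range(steps):
        hist.append(sum(c * h for c, h in zip(phi, hist[::-1])))
    return hist[len(x):]

rng = np.random.default_rng(3)
x = np.zeros(500)
for t in range(1, 500):                  # simulated AR(1) signal, phi = 0.8
    x[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
phi = fit_ar(x, 1)                       # recovered coefficient ~ 0.8
```

    A downstream classifier (the paper's HMM) would then label the forecast states as safe or pre-fall.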

  12. An enhanced lumped element electrical model of a double barrier memristive device

    NASA Astrophysics Data System (ADS)

    Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz

    2017-05-01

    The massively parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems—nonlinear resistors with memory. They are fabricated in nanotechnology and hence parameter spread during fabrication may complicate reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and neither convenient for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky-contact. The introduced model leads to very fast simulations using standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model.

  13. Application of particle and lattice codes to simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Damjanac, Branko; Detournay, Christine; Cundall, Peter A.

    2016-04-01

    With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding and capability to model the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs has become very important for the petroleum industry (but also for some other industries like mining and geothermal). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable models sizes and computer run times) to the coupled hydro-mechanical problems of relevant time and length scales for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparing with available analytical asymptotic solutions (penny-shape crack) and some examples of field application (e.g., interaction with DFN) are also included.

  14. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to the un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  15. Torsional Dynamics of Steerable Needles: Modeling and Fluoroscopic Guidance

    PubMed Central

    Swensen, John P.; Lin, MingDe; Okamura, Allison M.; Cowan, Noah J.

    2017-01-01

    Needle insertions underlie a diversity of medical interventions. Steerable needles provide a means by which to enhance existing needle-based interventions and facilitate new ones. Tip-steerable needles follow a curved path and can be steered by twisting the needle base during insertion, but this twisting excites torsional dynamics that introduce a discrepancy between the base and tip twist angles. Here, we model the torsional dynamics of a flexible rod—such as a tip-steerable needle—during subsurface insertion and develop a new controller based on the model. The torsional model incorporates time-varying mode shapes to capture the changing boundary conditions inherent during insertion. Numerical simulations and physical experiments using two distinct setups—stereo camera feedback in semi-transparent artificial tissue and feedback control with real-time X-ray imaging in optically opaque artificial tissue—demonstrate the need to account for torsional dynamics in control of the needle tip. PMID:24860026

  16. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (quartic time, O(n^4)). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low-sequence-identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  17. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.

  18. The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process

    NASA Astrophysics Data System (ADS)

    Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko

    2012-06-01

    A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.

  19. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.

  20. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

  1. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
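
The contrast between the two capture-timing models can be sketched with a toy simulation: a simple Poisson renewal process versus a compound Poisson process in which each bout yields several captures. Rates and bout sizes below are invented, not the paper's fitted values:

```python
import random

def capture_times(t_end, bout_rate, bout_size, rng):
    """Bout onsets follow a Poisson process (exponential gaps); each bout
    yields `bout_size` captures. bout_size=1 reduces to the simple Poisson
    renewal model; >1 gives the compound-Poisson 'feeding bout' model."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(bout_rate)
        if t >= t_end:
            break
        times.extend([t] * bout_size)
    return times

def dispersion(times, t_end, width=1.0):
    """Variance-to-mean ratio of capture counts in fixed windows: ~1 for
    random (Poisson) timing, >1 for captures clustered through time."""
    nbins = int(t_end / width)
    counts = [0] * nbins
    for t in times:
        counts[min(int(t / width), nbins - 1)] += 1
    mean = sum(counts) / nbins
    var = sum((c - mean) ** 2 for c in counts) / nbins
    return var / mean

rng = random.Random(42)
d_random = dispersion(capture_times(5000.0, 1.0, 1, rng), 5000.0)
d_clustered = dispersion(capture_times(5000.0, 1.0, 3, rng), 5000.0)
```

Comparing an index of dispersion like this against field data is one simple way to distinguish random from contagious capture timing.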

  2. Dynamic and Thermal Turbulent Time Scale Modelling for Homogeneous Shear Flows

    NASA Technical Reports Server (NTRS)

    Schwab, John R.; Lakshminarayana, Budugur

    1994-01-01

    A new turbulence model, based upon dynamic and thermal turbulent time scale transport equations, is developed and applied to homogeneous shear flows with constant velocity and temperature gradients. The new model comprises transport equations for k, the turbulent kinetic energy; tau, the dynamic time scale; k(sub theta), the fluctuating temperature variance; and tau(sub theta), the thermal time scale. It offers conceptually parallel modeling of the dynamic and thermal turbulence at the two equation level, and eliminates the customary prescription of an empirical turbulent Prandtl number, Pr(sub t), thus permitting a more generalized prediction capability for turbulent heat transfer in complex flows and geometries. The new model also incorporates constitutive relations, based upon invariant theory, that allow the effects of nonequilibrium to modify the primary coefficients for the turbulent shear stress and heat flux. Predictions of the new model, along with those from two other similar models, are compared with experimental data for decaying homogeneous dynamic and thermal turbulence, homogeneous turbulence with constant temperature gradient, and homogeneous turbulence with constant temperature gradient and constant velocity gradient. The new model offers improvement in agreement with the data for most cases considered in this work, although it was no better than the other models for several cases where all the models performed poorly.

  3. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and intra die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lies in finding robust correlation between SEM waveform features and bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases, the MU of the state-of-the-art CD-SEM has been improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
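
The core idea, an empirical model trained on reference metrology and then applied in real time, can be reduced to a miniature example. The single waveform feature, the linear bias model, and all numbers below are hypothetical stand-ins for the paper's RM-based models:

```python
def fit_bias_model(features, biases):
    """Least-squares line relating one SEM waveform feature to observed CD
    bias (CD-SEM minus reference metrology); a one-feature stand-in for the
    empirical RM-based models described in the abstract."""
    n = len(features)
    mx = sum(features) / n
    my = sum(biases) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(features, biases))
             / sum((x - mx) ** 2 for x in features))
    return slope, my - slope * mx

# hypothetical calibration set: waveform-slope feature vs observed bias (nm)
feat = [0.2, 0.4, 0.6, 0.8, 1.0]
bias = [3.1, 2.5, 1.9, 1.3, 0.7]        # exactly linear, for illustration
k, c = fit_bias_model(feat, bias)

def correct_cd(cd_sem_nm, feature):
    """Real-time correction: subtract the predicted bias from the raw CD."""
    return cd_sem_nm - (k * feature + c)

# a raw measurement that carries the modeled bias is pulled back to 45.0 nm
cd = correct_cd(45.0 + (k * 0.5 + c), 0.5)
```

The expensive step (collecting RM data and fitting) happens offline; the per-image correction is a cheap function evaluation, which is what makes real-time use possible.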

  4. Solar Atmosphere to Earth's Surface: Long Lead Time dB/dt Predictions with the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.

    2017-12-01

    The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known if this nascent capability can translate to skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical and observationally based system to estimate the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfven Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long lead time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes. 
Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.

  5. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm, which utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows significantly improved image quality.
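
Why a model-based search needs fewer measurements than a coordinate search can be seen in one dimension: fit a local model to a few metric samples and jump to the model's optimum. The sketch below uses a quadratic model in the spirit of NEWUOA (not the NEWUOA or DONE implementations themselves) and an invented Gaussian stand-in for the image-quality metric:

```python
import math

def sharpness(a, true_ab=0.3):
    """Stand-in image-quality metric, peaked at the true aberration value."""
    return math.exp(-(a - true_ab) ** 2)

def quadratic_step(metric, center, step):
    """Fit a parabola through the metric at center-step, center, center+step
    and return its vertex: one model-based update from three measurements."""
    y1, y2, y3 = metric(center - step), metric(center), metric(center + step)
    denom = y1 - 2.0 * y2 + y3
    if denom == 0.0:
        return center
    return center + 0.5 * step * (y1 - y3) / denom

a = 0.0
for step in (1.0, 0.25):          # shrink the sampling step each iteration
    a = quadratic_step(sharpness, a, step)
```

Six metric evaluations locate the optimum to within a few percent here, whereas a fixed-grid coordinate search of comparable accuracy would need far more; the same economy is what drives the measurement savings reported above.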

  6. Predicting Exposure to Consumer-Products Using Agent-Based Models Embedded with Needs-Based Artificial Intelligence and Empirically -Based Scheduling Models

    EPA Science Inventory

    Information on human behavior and consumer product use is important for characterizing exposures to chemicals in consumer products and in indoor environments. Traditionally, exposure-assessors have relied on time-use surveys to obtain information on exposure-related behavior. In ...

  7. Study on residual discharge time of lead-acid battery based on fitting method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Wangwang; Jin, Yueqiang; Wang, Shuying

    2017-05-01

    Using a fitting method, this paper analyzes the data of Problem C of the 2016 mathematical modeling contest. A model of the residual discharge time of a lead-acid battery under constant-current discharge at 20 A, 30 A, ..., 100 A is obtained, and a discharge-time model for discharge at an arbitrary constant current is presented. The mean relative error of the model is calculated to be about 3%, which shows that the model has high accuracy. This model can provide a basis for optimally matching the power system to an electric vehicle.
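
A minimal version of such a fit can be sketched with a Peukert-style power law, t = a / I^b, fitted by least squares in log-log space. The functional form and the synthetic data below are assumptions for illustration, not the paper's fitted model or contest data:

```python
import math

def fit_power_law(currents, times):
    """Least-squares fit of ln t = ln a - b ln I (a Peukert-style law,
    assumed here as a stand-in for the paper's fitted model)."""
    xs = [math.log(i) for i in currents]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -(sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my + b * mx)
    return a, b

def remaining_time(a, b, current, elapsed):
    """Residual discharge time (h) at a given constant current (A)."""
    return a / current ** b - elapsed

currents = [20, 30, 40, 50, 60, 70, 80, 90, 100]
times = [500.0 / i ** 1.3 for i in currents]     # synthetic, noise-free data
a, b = fit_power_law(currents, times)
rt = remaining_time(a, b, 55.0, 1.0)             # interpolated current, 1 h elapsed
```

Once a and b are fitted from the tabulated constant-current tests, the same formula interpolates to currents between the tested values, which is how a model "under arbitrary constant current" falls out of the fit.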

  8. Development of a physiologically based pharmacokinetic model for bisphenol A in pregnant mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawamoto, Yuko; Matsuyama, Wakoto; Wada, Masahiro

    Bisphenol A (BPA) is a weakly estrogenic monomer used to produce polymers for food contact and other applications, so there is potential for oral exposure of humans to trace amounts via ingestion, and the risk assessment of BPA for the development of human offspring is an important issue. There have been limited data on the exposure level of human fetuses to BPA (e.g. BPA concentration in cord blood), and no information is available on the pharmacokinetics of BPA in humans with or without fetuses; an estimate by a mathematical model is therefore essential, since information on humans is difficult to obtain experimentally. To date, no physiologically based pharmacokinetic (PBPK) model has been reported for BPA in pregnant mice with or without fetuses. In the present study, we developed a PBPK model describing the pharmacokinetics of BPA in a pregnant mouse with the prospect of future extrapolation to humans. The PBPK model was constructed based on the pharmacokinetic data of an experiment we executed on pregnant mice following single oral administration of BPA. The model could describe the rapid transfer of BPA through the placenta to the fetus and the slow disappearance from fetuses. The simulated time courses after three repeated oral administrations of BPA by the constructed model fitted well with the experimental data, and the simulation for a 10 times lower dose was also consistent with the experiment. This suggested that the PBPK model for BPA in pregnant mice was successfully verified and is highly promising for extrapolation to humans, who are expected to be exposed more chronically to lower doses.

  9. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  10. Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay

    NASA Astrophysics Data System (ADS)

    Novi W, Cascarilla; Lestari, Dwi

    2016-02-01

    This study aims to explain the stability of a model of the spread of AIDS that incorporates treatment and vertical transmission. A person infected with HIV needs time before developing AIDS; because this progression can be delayed, the resulting model is one with time delay. The model takes the form of a nonlinear differential equation with time delay, SIPTA (susceptible-infected-pre AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than the critical value of delay, unstable if the time delay is more than the critical value, and bifurcation occurs if the time delay is equal to the critical value.
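
The role of the basic reproduction number in such delay models can be illustrated numerically with Euler integration and a history buffer. This is a generic two-compartment delayed-epidemic sketch with invented parameters, not the paper's five-compartment SIPTA equations:

```python
def simulate_delay_si(beta, gamma, mu, tau, t_end=200.0, dt=0.01):
    """Euler integration of a toy delayed epidemic model in which new
    infections at time t arise from contacts made at time t - tau.
    A generic sketch of a delay model, not the SIPTA system itself."""
    n = int(t_end / dt)
    lag = int(tau / dt)
    S = [0.0] * (n + 1)
    I = [0.0] * (n + 1)
    S[0], I[0] = 0.99, 0.01
    for k in range(n):
        kd = max(k - lag, 0)                      # delayed index (constant history)
        new_inf = beta * S[kd] * I[kd]
        S[k + 1] = S[k] + dt * (mu * (1.0 - S[k]) - beta * S[k] * I[k])
        I[k + 1] = I[k] + dt * (new_inf - (gamma + mu) * I[k])
    return S, I

# basic reproduction number R0 = beta / (gamma + mu)
_, I_sub = simulate_delay_si(beta=0.2, gamma=0.3, mu=0.1, tau=5.0)  # R0 = 0.5
_, I_end = simulate_delay_si(beta=1.0, gamma=0.3, mu=0.1, tau=5.0)  # R0 = 2.5
```

With R0 below one the infection dies out regardless of the delay, matching the disease-free stability condition quoted above; with R0 above one the infection persists, and it is the size of the delay relative to a critical value that then decides stability versus oscillation.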

  11. Tracking Expected Improvements of Decadal Prediction in Climate Services

    NASA Astrophysics Data System (ADS)

    Suckling, E.; Thompson, E.; Smith, L. A.

    2013-12-01

    Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth Systems Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology is shown to be competitive with CMIP5 climate models on decadal scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits their physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination (ii) of the interpretation of probabilistic skill and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models, that is not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. 
In weather forecasting this role is filled by the climatological distribution, and can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do suggest a clear quantification of limits, as a function of lead time, for spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strength and their foundation, risks the credibility of science in support of policy in the long term.

  12. Ferroelectric Field Effect Transistor Model Using Partitioned Ferroelectric Layer and Partial Polarization

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat D.

    2004-01-01

    A model of an n-channel ferroelectric field effect transistor has been developed based on both theoretical and empirical data. The model is based on an existing model that incorporates partitioning of the ferroelectric layer to calculate the polarization within the ferroelectric material. The model incorporates several new aspects that are useful to the user. It takes into account the effect of a non-saturating gate voltage only partially polarizing the ferroelectric material based on the existing remnant polarization. The model also incorporates the decay of the remnant polarization based on the time history of the FFET. A gate pulse of a specific voltage will not put the ferroelectric material into a single polarization state for that voltage; instead, the polarization varies with the previous state of the material and the time since the last change to the gate voltage. The model also utilizes data from FFETs made from different types of ferroelectric materials, allowing the user to simply input the material being used rather than recreate the entire model. The model also allows the user to input the quality of the ferroelectric material being used. The ferroelectric material quality can range from a theoretically perfect material with little loss and no decay to a less-than-perfect material with remnant losses and decay. This model is designed to be used by people who need to predict the external characteristics of an FFET before the time and expense of design and fabrication. It also allows the parametric evaluation of the effect of ferroelectric film quality on the overall performance of the transistor.

  13. Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates

    PubMed Central

    Chen, Jin; Venugopal, Vivek; Intes, Xavier

    2011-01-01

    Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiple scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffuse medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
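
The core of lifetime multiplexing with time gates can be shown in miniature, without the tomography: two fluorophores with different lifetimes contribute differently to each gate, so two gated measurements determine both concentrations. The lifetimes, gates and concentrations below are illustrative values, not the paper's measured parameters, and propagation effects are ignored:

```python
import math

def gate_weight(tau, t0, t1):
    """Integral of a unit-amplitude decay exp(-t/tau) over a time gate."""
    return tau * (math.exp(-t0 / tau) - math.exp(-t1 / tau))

def unmix(m0, m1, tau_a, tau_b, gates):
    """Solve the 2x2 linear system mapping two fluorophore concentrations
    to two gated measurements: the lifetime-multiplexing idea in miniature."""
    (g0a, g0b), (g1a, g1b) = ((gate_weight(tau_a, *g), gate_weight(tau_b, *g))
                              for g in gates)
    det = g0a * g1b - g0b * g1a
    ca = (m0 * g1b - g0b * m1) / det
    cb = (g0a * m1 - m0 * g1a) / det
    return ca, cb

tau_a, tau_b = 0.5, 0.8              # ns; illustrative lifetimes only
gates = [(0.0, 1.0), (1.0, 3.0)]     # two time gates (ns)
true = (2.0, 1.5)                    # "unknown" concentrations (arb. units)
meas = [true[0] * gate_weight(tau_a, *g) + true[1] * gate_weight(tau_b, *g)
        for g in gates]
ca, cb = unmix(meas[0], meas[1], tau_a, tau_b, gates)
```

In the full method each gate additionally carries a spatial Jacobian computed by Monte Carlo, so the same separation happens voxel by voxel inside the inverse problem.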

  14. Comparing Stream DOC Fluxes from Sensor- and Sample-Based Approaches

    NASA Astrophysics Data System (ADS)

    Shanley, J. B.; Saraceno, J.; Aulenbach, B. T.; Mast, A.; Clow, D. W.; Hood, K.; Walker, J. F.; Murphy, S. F.; Torres-Sanchez, A.; Aiken, G.; McDowell, W. H.

    2015-12-01

    DOC transport by streamwater is a significant flux that does not consistently show up in ecosystem carbon budgets. In an effort to quantify stream DOC flux, we analyzed three to four years of high-frequency in situ fluorescing dissolved organic matter (FDOM) concentrations and turbidity measured by optical sensors at the five diverse forested and/or alpine headwater sites of the U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) program. FDOM serves as a proxy for DOC. We also took discrete samples over a range of hydrologic conditions, using both manual weekly and automated event-based sampling. After compensating FDOM for temperature effects and turbidity interference (which was successful even at the high-turbidity Luquillo, PR site), we evaluated the DOC-FDOM relation based on discrete sample DOC analyses matched to corrected FDOM at the time of sampling. FDOM was a moderately robust predictor of DOC, with r2 from 0.60 to more than 0.95 among sites. We then formed continuous DOC time series by two independent approaches: (1) DOC predicted from FDOM; and (2) the composite method, based on modeled DOC from regression on stream discharge, season, air temperature, and time, forcing the model to observations and adjusting modeled concentrations between observations by linearly interpolated model residuals. DOC flux from each approach was then computed directly as concentration times discharge. DOC fluxes based on the sensor approach were consistently greater than those from the sample-based approach. At Loch Vale, CO (2.5 years) and Panola Mountain, GA (1 year), the difference was 5-17%. At Sleepers River, VT (3 years), preliminary differences were greater than 20%. The difference is driven by the highest events, but we are investigating these results further. We will also present comparisons from Luquillo, PR, and Allequash Creek, WI. 
The higher sensor-based DOC fluxes could result from their accuracy during hysteresis, which is difficult to model. In at least one case the higher sensor-based DOC flux was linked to an unsampled event outside the range of the concentration model. Sensors require upkeep and vigilance with the data, but have the potential to yield more accurate fluxes than sample-based approaches.
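
The sensor-based pipeline described above, calibrating DOC against FDOM with discrete samples and then computing flux as concentration times discharge, can be sketched as follows. All calibration pairs and sensor readings are invented for illustration:

```python
def linreg(xs, ys):
    """Ordinary least-squares slope/intercept for the DOC-FDOM calibration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical paired grab samples: corrected FDOM vs lab-analyzed DOC (mg/L)
fdom_samples = [10, 18, 25, 33, 41, 55]
doc_samples = [1.4, 2.2, 2.9, 3.7, 4.5, 5.9]     # exactly linear, for clarity
slope, icept = linreg(fdom_samples, doc_samples)

# 15-minute sensor series: corrected FDOM and discharge (L/s)
fdom = [12, 20, 45, 60, 38, 22]
q = [50, 60, 180, 260, 150, 80]
dt = 900.0                                        # seconds per reading

# flux (mg) = sum of predicted concentration x discharge x time step
flux_mg = sum((slope * f + icept) * qi * dt for f, qi in zip(fdom, q))
```

Because the sensor series resolves event peaks and hysteresis that weekly samples miss, flux integrated this way can legitimately exceed a sample-based estimate, which is the pattern the comparison above reports.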

  15. A model for food and stimulus changes that signal time-based contingency changes.

    PubMed

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2014-11-01

    When the availability of reinforcers depends on time since an event, time functions as a discriminative stimulus. Behavioral control by elapsed time is generally weak, but may be enhanced by added stimuli that act as additional time markers. The present paper assessed the effect of brief and continuous added stimuli on control by time-based changes in the reinforcer differential, using a procedure in which the local reinforcer ratio reversed at a fixed time after the most recent reinforcer delivery. Local choice was enhanced by the presentation of the brief stimuli, even when the stimulus change signalled only elapsed time, but not the local reinforcer ratio. The effect of the brief stimulus presentations on choice decreased as a function of time since the most recent stimulus change. We compared the ability of several versions of a model of local choice to describe these data. The data were best described by a model which assumed that error in discriminating the local reinforcer ratio arose from imprecise discrimination of reinforcers in both time and space, suggesting that timing behavior is controlled not only by discrimination of elapsed time, but by discrimination of the reinforcer differential in time. © Society for the Experimental Analysis of Behavior.

  16. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
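
The time-variant displacement idea, interpolating between per-phase displacement fields and mapping the device position back into reference-CT space, can be shown for a single tracked point. The phase displacements and timings below are invented, and a real implementation interpolates a full voxel grid rather than one vector:

```python
def interp_displacement(phase_fields, t, period):
    """Linear interpolation in time between per-phase displacement vectors
    of a cyclic breathing model (one point shown; real models store a
    displacement vector per voxel)."""
    phase = (t % period) / period * len(phase_fields)
    i = int(phase) % len(phase_fields)
    j = (i + 1) % len(phase_fields)
    w = phase - int(phase)
    return tuple((1.0 - w) * a + w * b
                 for a, b in zip(phase_fields[i], phase_fields[j]))

def to_reference(pos, disp):
    """Map a haptic-device position in the animated scene back into the
    space of the reference CT image by removing the displacement."""
    return tuple(p - d for p, d in zip(pos, disp))

# hypothetical displacements (mm) of one liver point over four breathing phases
phases = [(0.0, 0.0, 0.0), (0.0, 4.0, 8.0), (0.0, 6.0, 12.0), (0.0, 3.0, 6.0)]
d = interp_displacement(phases, t=1.25, period=4.0)
p_ref = to_reference((10.0, 20.0, 30.0), d)
```

Because each query is a handful of arithmetic operations, evaluating it per haptic frame is cheap, which is consistent with the kilohertz-range haptic update rates reported above.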

  17. a Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  18. Proper accounting for time increases crop-based biofuels' greenhouse gas deficit versus petroleum

    NASA Astrophysics Data System (ADS)

    O'Hare, M.; Plevin, R. J.; Martin, J. I.; Jones, A. D.; Kendall, A.; Hopson, E.

    2009-04-01

    The global warming intensities of crop-based biofuels and fossil fuels differ not only in amount but also in their discharge patterns over time. Early discharges, for example, from market-mediated land use change, will have created more global warming by any time in the future than later discharges, owing to the slow decay of atmospheric CO2. A spreadsheet model of this process, BTIME, captures this important time pattern effect using the Bern CO2 decay model to allow fuels to be compared for policy decisions on the basis of their real warming effects with a variety of user-supplied parameter values. The model also allows economic discounting of climate effects extended far into the future. Compared to approaches that simply sum greenhouse gas emissions over time, recognizing the physics of atmospheric CO2 decay significantly increases the deficit relative to fossil fuel of any biofuel causing land use change.
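
The timing effect can be made concrete with a fitted impulse-response approximation of the Bern carbon cycle: a fraction of an emitted CO2 pulse remains airborne for centuries, so an early pulse accumulates more warming by a fixed horizon than a late one. The coefficients below are from a common published approximation and should be treated as assumptions, not the exact values inside BTIME:

```python
import math

# fitted Bern-style impulse response: fraction of a CO2 pulse still airborne
# after t years (one permanent term plus three decaying terms; coefficients
# are a common approximation, assumed here for illustration)
TERMS = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def airborne(t):
    """Airborne fraction of a unit CO2 pulse t years after emission."""
    return sum(a * (1.0 if tau is None else math.exp(-t / tau))
               for a, tau in TERMS)

def cumulative_warming(emit_year, horizon=100):
    """Proxy for warming created by a unit pulse emitted in emit_year,
    summed yearly up to the horizon (no discounting applied)."""
    return sum(airborne(y - emit_year) for y in range(emit_year, horizon + 1))

early = cumulative_warming(0)     # e.g. land-use-change emission at year 0
late = cumulative_warming(50)     # the same emission deferred to year 50
```

Summing raw emissions would score the two pulses identically; integrating the decay curve, as above, gives the early pulse a larger warming total by the horizon, which is the effect that widens the biofuel deficit.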

  19. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

    Accurate nitrate load predictions can improve water-quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of streamflow-nitrate time series were proposed for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to consider seasonality-based characteristics of the time series, wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI for proper choice of cluster agents to be imposed into the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only considers the prediction of the outlet nitrate but also covers predictions of interior sub-basin nitrate load values. The results indicated that the proposed FFNN model coupled with the SOM-MI improved the performance of MS nitrate predictions compared to the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
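
The MI input-selection step can be illustrated with a simple histogram estimator: candidate inputs are ranked by their mutual information with the target series, and an informative candidate scores far above an irrelevant one. The synthetic series below are invented, and the paper's estimator may differ in binning details:

```python
import math
import random

def mutual_info(xs, ys, bins=8):
    """Histogram estimate of mutual information (nats) between two series,
    the kind of criterion used to rank candidate model inputs."""
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    def to_bin(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(xs)
    joint = {}
    px = [0] * bins
    py = [0] * bins
    for x, y in zip(xs, ys):
        i, j = to_bin(x, lo_x, hi_x), to_bin(y, lo_y, hi_y)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[i] * py[j]))  # p log(p / px py)
    return mi

rng = random.Random(1)
flow = [rng.gauss(0.0, 1.0) for _ in range(5000)]          # candidate input 1
noise = [rng.gauss(0.0, 1.0) for _ in range(5000)]         # candidate input 2
nitrate = [0.8 * f + rng.gauss(0.0, 0.3) for f in flow]    # target series

mi_informative = mutual_info(flow, nitrate)
mi_irrelevant = mutual_info(noise, nitrate)
```

Unlike linear correlation, MI also captures nonlinear dependence, which is why it suits input screening for neural-network and support-vector models.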

  20. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team are able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October, 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with investigator's model, seven were confirmed and five were rejected or compromised.

  1. On the real-time estimation of the wheel-rail contact force by means of a new nonlinear estimator design model

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2018-05-01

    The dynamics of railway vehicles is strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors such as vehicle speed, wear and adhesion level, and, moreover, it is nonlinear. As a consequence, modelling and observing this kind of phenomenon are complex tasks but, at the same time, they constitute a fundamental step in estimating the adhesion level or monitoring vehicle condition. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces based on an estimator design model that accounts for the nonlinearities of the interaction by means of a fitting model designed to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.

  2. Advanced Intelligent System Application to Load Forecasting and Control for Hybrid Electric Bus

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Elfayoumy, Mahmoud

    1996-01-01

    The primary motivation for this research emanates from providing a decision support system to electric bus operators in municipal and urban localities that will guide the operators to maintain an optimal compromise among noise level, pollution level, fuel usage, etc. This study is backed up by our previous studies of battery characteristics, permanent magnet DC motors and electric traction motor sizing completed in the first year. The operator of the hybrid electric bus must determine an optimal power management schedule to meet a given load demand for different weather and road conditions. The decision support system for the bus operator comprises three sub-tasks: forecasting the electrical load for the route to be traversed, divided into specified time periods (a few minutes); deriving an optimal 'plan' or 'preschedule' based on the load forecast for the entire time horizon (i.e., for all time periods) ahead of time; and finally employing corrective control action to monitor and modify the optimal plan in real time. A fully connected artificial neural network (ANN) model is developed for forecasting the kW requirement of the hybrid electric bus based on inputs such as climatic conditions, passenger load, road inclination, etc. The ANN model is trained using the back-propagation algorithm, employing improved optimization techniques such as the projected Lagrangian technique. The pre-scheduler is based on a Goal-Programming (GP) optimization model with noise, pollution and fuel usage as the three objectives. GP has the capability of analyzing the trade-off among the conflicting objectives and arriving at the optimal activity levels, e.g., throttle settings. The corrective control action, the third sub-task, is formulated as an optimal control model with inputs from the real-time database as well as the GP model to minimize the error (or deviation) from the optimal plan. 
These three activities are linked: the ANN forecaster provides its output to the GP model, which in turn produces the pre-schedule for the optimal control model. Some preliminary results based on a hypothetical test case will be presented for the load forecasting module. The computer codes for the three modules will be made available for adoption by bus operating agencies. Sample results will be provided using these models. The software will be a useful tool for supporting the control systems for the Electric Bus project of NASA.

  3. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partitioning of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase, and uses heuristic rules to generate forecast values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
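
    The core idea of adapting the analysis window from training accuracy can be illustrated with a toy stand-in. Here a simple moving-average forecaster replaces the fuzzy forecasting rules, and the function name and candidate window sizes are invented for illustration.

```python
def adaptive_window(series, candidate_sizes=(2, 3, 4, 5, 6)):
    """Pick the analysis-window size with the lowest one-step-ahead
    training error (MAE), mimicking the ATVF training phase with a
    moving-average forecaster in place of fuzzy rules."""
    def mae(w):
        errors = [abs(sum(series[t - w:t]) / w - series[t])
                  for t in range(w, len(series))]
        return sum(errors) / len(errors)
    return min(candidate_sizes, key=mae)
```

    For a linearly trending series the smallest window wins, since a short moving average lags the trend least.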

  4. Equivalent Discrete-Time Channel Modeling for Molecular Communication With Emphasize on an Absorbing Receiver.

    PubMed

    Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam

    2017-01-01

    This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.
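
    For a point source at distance d from the centre of a fully absorbing spherical receiver of radius a, the fraction of molecules absorbed by time t has a well-known closed form, and equivalent discrete-time taps follow by differencing it over symbol slots. A minimal sketch, with illustrative function names and parameter values:

```python
import math

def hitting_prob(t, d, a, D):
    """Fraction of molecules absorbed by time t, i.e.
    (a/d) * erfc((d - a) / sqrt(4*D*t)), for a point source at distance d
    from an absorbing sphere of radius a in a medium with diffusion
    coefficient D."""
    if t <= 0:
        return 0.0
    return (a / d) * math.erfc((d - a) / math.sqrt(4.0 * D * t))

def channel_taps(n_taps, dt, d, a, D):
    """Equivalent discrete-time channel: absorption probability per slot."""
    F = [hitting_prob(k * dt, d, a, D) for k in range(n_taps + 1)]
    return [F[k + 1] - F[k] for k in range(n_taps)]
```

    The taps are nonnegative and sum to at most a/d, the probability that a molecule is ever absorbed.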

  5. Incorporating time-delays in S-System model for reverse engineering genetic networks.

    PubMed

    Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan

    2013-06-18

    In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all S-System based existing modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. 
The experiments carried out on two well-known real-life networks, namely IRMA and the SOS DNA repair network in Escherichia coli, show a significant improvement compared with other state-of-the-art approaches for GRN modeling.
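
    The delay differential form of the S-System can be integrated numerically. The sketch below simulates a hypothetical two-gene system with one delayed production term using explicit Euler stepping; the network, parameter values and function name are illustrative, not those of the paper.

```python
import math

def simulate_tdss(alpha, beta, g, h, delay, x0, dt=0.01, steps=2000):
    """Euler integration of a 2-gene time-delayed S-System:
    dx_i/dt = alpha_i * prod_j xd_j**g[i][j] - beta_i * prod_j x_j**h[i][j],
    where xd_j is x_j evaluated `delay` time units in the past."""
    lag = int(round(delay / dt))
    hist = [list(x0)]
    for _ in range(steps):
        x = hist[-1]
        xd = hist[max(0, len(hist) - 1 - lag)]  # delayed state
        nxt = []
        for i in range(2):
            prod_g = math.prod(xd[j] ** g[i][j] for j in range(2))
            prod_h = math.prod(x[j] ** h[i][j] for j in range(2))
            nxt.append(x[i] + dt * (alpha[i] * prod_g - beta[i] * prod_h))
        hist.append(nxt)
    return hist
```

    With gene 2 activated by the delayed level of gene 1 (g[1][0] = 1) and linear degradation, both genes settle to the fixed point x = (1, 1).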

  6. Topographic evolution of orogens: The long term perspective

    NASA Astrophysics Data System (ADS)

    Robl, Jörg; Hergarten, Stefan; Prasicek, Günther

    2017-04-01

    The landscape of mountain ranges reflects the competition between tectonics and climate, which build up and destroy topography, respectively. While there is a broad consensus on the acting processes, there is a vital debate whether the topography of individual orogens reflects stages of growth, steady state or decay. This debate is fuelled by the million-year time scales hampering direct observation of landscape evolution in mountain ranges, the superposition of various process patterns and the complex interactions among different processes. In this presentation we focus on orogen-scale landscape evolution based on time-dependent numerical models and explore model time series to constrain the development of mountain range topography during an orogenic cycle. The erosional long-term response of rivers and hillslopes to uplift can be mathematically formalised by the stream power and mass diffusion equations, respectively, which enables us to describe the time-dependent evolution of topography in orogens. Based on a simple one-dimensional model consisting of two rivers separated by a watershed, we explain the influence of uplift rate and rock erodibility on steady-state channel profiles and show the time-dependent development of the channel-drainage divide system. The effect of dynamic drainage network reorganization adds additional complexity, and its effect on topography is explored on the basis of two-dimensional models. Further complexity is introduced by coupling a mechanical model (thin viscous sheet approach) describing continental collision, crustal thickening and topography formation with a stream-power-based landscape evolution model. Model time series show the impact of crustal deformation on drainage networks and consequently on the evolution of mountain range topography (Robl et al., in review). All model outcomes, from simple one-dimensional to coupled two-dimensional models, are presented as movies featuring a high spatial and temporal resolution. 
Robl, J., S. Hergarten, and G. Prasicek (in review), The topographic state of mountain ranges, Earth Science Reviews.
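
    The stream-power response described above can be sketched with a one-dimensional explicit scheme. The parameter values, the quadratic Hack-type drainage-area proxy and the function name below are illustrative assumptions, not the presenters' model.

```python
def evolve_profile(n_nodes=50, dx=100.0, U=1e-3, K=1e-5, m=0.5, n_exp=1.0,
                   dt=50.0, steps=500):
    """Explicit 1-D detachment-limited stream-power model,
    dz/dt = U - K * A**m * S**n, with node 0 pinned at base level and
    drainage area A growing with distance from the divide
    (a crude Hack's-law stand-in)."""
    z = [0.0] * n_nodes
    area = [((n_nodes - i) * dx) ** 2 for i in range(n_nodes)]
    for _ in range(steps):
        new = z[:]
        for i in range(1, n_nodes):
            # upwind (downstream-facing) slope; no erosion on adverse slopes
            slope = max(z[i] - z[i - 1], 0.0) / dx
            new[i] = z[i] + dt * (U - K * area[i] ** m * slope ** n_exp)
        z = new
    return z
```

    Uplift steepens the profile until the erosion term balances U, giving the familiar concave-up steady-state channel.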

  7. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
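
    At each iteration, SQP solves a quadratic subproblem with linearised constraints; for equality constraints that subproblem reduces to a single linear KKT solve. A minimal sketch of this core step (not the paper's GGMRF-regularised implementation):

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    """Solve the equality-constrained QP  min 0.5 x'Qx + c'x  s.t. Ax = b
    via its KKT system -- the subproblem an SQP iteration solves after
    quadratically approximating the objective and linearising the
    constraints."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]  # primal step, Lagrange multipliers
```

    For example, minimising 0.5*(x1**2 + x2**2) subject to x1 + x2 = 2 returns the step x = (1, 1).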

  8. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  9. Model-driven development of covariances for spatiotemporal environmental health assessment.

    PubMed

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.

  10. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
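
    The analytical Bayesian step at the heart of such an inversion can be sketched for a linear forward model with Gaussian prior and noise. The Green's-function matrix, standard deviations and function name here are placeholders, not the authors' operational code.

```python
import numpy as np

def bayesian_slip(G, d, sigma_d=1.0, sigma_m=10.0):
    """Analytical Gaussian posterior for a linearised slip inversion
    d = G m + noise, with prior m ~ N(0, sigma_m^2 I) and noise
    ~ N(0, sigma_d^2 I). Returns the posterior mean and covariance."""
    n = G.shape[1]
    prec = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (G.T @ d) / sigma_d**2
    return mean, cov
```

    Because the posterior is available in closed form, both the slip estimate and its uncertainty come from one linear solve, which is what makes real-time operation feasible.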

  11. A Tree Based Broadcast Scheme for (m, k)-firm Real-Time Stream in Wireless Sensor Networks

    PubMed Central

    Park, HoSung; Kim, Beom-Su; Kim, Kyong Hoon; Shah, Babar; Kim, Ki-Il

    2017-01-01

    Recently, various unicast routing protocols have been proposed to deliver measured data from sensor nodes to the sink node within a predetermined deadline in wireless sensor networks. In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model, and improved energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there is as yet no study of a real-time broadcast protocol supporting an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new (m, k)-firm-based Real-time Broadcast Protocol (FRBP) that constructs a broadcast tree to satisfy the (m, k)-firm constraint, which is applicable to the real-time model in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To cope with unstable network environments, the recovery scheme invokes rapid partial tree reconstruction, designating another node as the parent on the tree according to the measured (m, k)-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP over existing schemes in terms of average deadline missing ratio, average throughput and energy consumption. PMID:29120404

  12. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimation to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The model obtained is then used to predict the average failure time for the type of lamp. By grouping the data into several intervals and taking the average failure value in each interval, the average failure time of the model is calculated on each interval; the p value obtained from the test result is 0.3296.
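
    With an exponential base model the estimation step is simple: the maximum-likelihood mean failure time is the sample mean, and the constant hazard rate is its reciprocal. A small illustration with invented failure times (this is not the paper's composite-hazard estimator, only its exponential building block):

```python
def exp_mle_mean(times):
    """MLE under an exponential model (constant hazard): the mean failure
    time is the sample mean; the hazard rate is its reciprocal."""
    mean = sum(times) / len(times)
    return mean, 1.0 / mean

def empirical_survival(times, t):
    """Empirical survival function S(t): fraction of lamps lasting past t."""
    return sum(1 for x in times if x > t) / len(times)
```

    For example, failure times of 2, 4 and 6 (in arbitrary units) give a mean failure time of 4.0 and a hazard rate of 0.25.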

  13. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most existing studies on wind power output focus on the fluctuation of wind farms, while the spatial self-complementarity of wind power output time series has been ignored. Therefore, the existing probability models cannot reflect the features of power systems incorporating wind farms. This paper analyzes the spatial self-complementarity of wind power and proposes a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model can provide an important reference for power system simulations incorporating wind farms.

  14. Interest-Driven Model for Human Dynamics

    NASA Astrophysics Data System (ADS)

    Shang, Ming-Sheng; Chen, Guan-Xiong; Dai, Shuang-Xing; Wang, Bing-Hong; Zhou, Tao

    2010-04-01

    Empirical observations indicate that the interevent time distribution of human actions exhibits heavy-tailed features. The queuing model based on task priorities is to some extent successful in explaining the origin of such heavy tails, however, it cannot explain all the temporal statistics of human behavior especially for the daily entertainments. We propose an interest-driven model, which can reproduce the power-law distribution of interevent time. The exponent can be analytically obtained and is in good accordance with the simulations. This model well explains the observed relationship between activities and power-law exponents, as reported recently for web-based behavior and the instant message communications.

  15. Modelling of Rainfall Induced Landslides in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.

    2010-12-01

    We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of landslide susceptibility is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to the tRIBS-VEGGIE framework after analyzing several failure criteria to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.

  16. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. 
We show how the Reliability and ROC tests allow us to judge data completeness and estimate error. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges and pitfalls in serving up these datasets over the web.
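
    The basic idea above, a large-earthquake probability that grows with the count of small earthquakes since the last large one, can be sketched with a Weibull hazard in "natural time" (event count). The parameter values and functional form below are a loose illustration of this class of model, not the published NTW implementation:

```python
import math

def conditional_prob(n_small, horizon, tau, beta=1.5):
    """Conditional Weibull probability that the next large earthquake
    occurs within the next `horizon` small events, given that `n_small`
    small events have elapsed since the last large one. `tau` is the
    characteristic small-event count and `beta` the Weibull shape."""
    H = lambda n: (n / tau) ** beta  # cumulative hazard in event count
    return 1.0 - math.exp(H(n_small) - H(n_small + horizon))
```

    With beta > 1 the hazard increases with the accumulated count, so regions further along in their cycle receive higher forecast probabilities.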

  17. Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification.

    PubMed

    Xu, Haiyang; Wang, Ping

    2016-01-01

    In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we proposed a model-based integration framework for the modeling and verification of time properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system, and utilizes state charts to create the dynamic model. In terms of the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can also be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generating algorithm for temporal logic formulae, which can automatically extract real-time properties from a time-sensitive live sequence chart (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as precisely analyze and verify the real-time reliability of the UAV flight control system.

  19. Using Data Assimilation in Real-Time Hydrological Modeling of Groundwater and Stream Flow in Silkeborg, Denmark

    NASA Astrophysics Data System (ADS)

    He, X.; Kidmose, J.; Madsen, H.; Zheng, C.; Refsgaard, J. C.

    2017-12-01

    Climate adaptation strategies are nowadays used more and more frequently in European cities, such as low-impact development to increase infiltration and thus reduce the risk of urban flooding. An alternative approach to coping with increased precipitation under future climate conditions is to use real-time management techniques to operate the drainage system. In the present study, we developed a real-time hydrological modeling system which can forecast both surface water and groundwater in the city of Silkeborg, Denmark. The model is based on the MIKE SHE code, and operates on a 50 × 50 m grid with an hourly time step. Real-time observation data, i.e. groundwater head data from 35 wells and 4 stream flow gauging stations, are used in a data assimilation (DA) framework in order to correct bias in each calculation cell. The DA framework is based on the ensemble Kalman filter (EnKF), where uncertainties from forcing data, model parameters as well as observations are taken into consideration. A case study was carried out in which the DA-enabled MIKE SHE model was executed in conjunction with rainfall products from the Danish Meteorological Institute: short-term weather forecasts from the HIRLAM model with a temporal resolution of 10 minutes and 8 hours lead time, and longer-term forecasts from the HARMONIE model with a temporal resolution of 1 hour and 48 hours lead time. The results show that DA can visibly increase the model performance for both groundwater head and stream discharge simulations. Even for the short period when observation data are not available (June 2016), the DA-based model still outperforms the model without DA. In the forecasting mode, the simulated stream discharge is, as expected, much more responsive to increases in rainfall than groundwater. The predicted and observed groundwater head in some areas varies only on the order of a few centimeters, which has little practical significance, whereas in other areas the variation can be as large as 1 m, depending on the underlying geology.
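
    The EnKF analysis step used in such a framework can be sketched in a few lines. This is the stochastic, perturbed-observation variant; the matrix shapes and names are our own, not those of the MIKE SHE implementation.

```python
import numpy as np

def enkf_update(Xf, H, y, obs_std, rng):
    """Stochastic ensemble Kalman filter analysis step.
    Xf: (n_state, n_ens) forecast ensemble, H: (n_obs, n_state) observation
    operator, y: (n_obs,) observation vector, obs_std: obs error std-dev."""
    n_obs, n_ens = len(y), Xf.shape[1]
    Xp = Xf - Xf.mean(axis=1, keepdims=True)           # state anomalies
    Yp = H @ Xp                                        # obs-space anomalies
    R = obs_std**2 * np.eye(n_obs)
    S = Yp @ Yp.T / (n_ens - 1) + R                    # innovation covariance
    K = Xp @ Yp.T / (n_ens - 1) @ np.linalg.inv(S)     # Kalman gain
    perturbed = y[:, None] + obs_std * rng.standard_normal((n_obs, n_ens))
    return Xf + K @ (perturbed - H @ Xf)               # analysis ensemble
```

    With an accurate observation, the analysis ensemble is pulled strongly towards the observed value while retaining spread consistent with the remaining uncertainty.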

  20. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  2. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
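The classifier arrangement described above, a binary decision tree whose nodes produce posterior probabilities, can be sketched as follows. The node functions here are hypothetical stand-ins for the trained probability-calibrated SVMs, and the feature thresholds are invented for illustration.

```python
def healthy_vs_faulty(features):
    # Hypothetical node classifier: returns P(healthy) from vibration features.
    return 0.9 if features["rms"] < 1.0 else 0.2

def mechanical_vs_electrical(features):
    # Hypothetical node classifier: returns P(mechanical fault).
    return 0.8 if features["peak_hz"] < 400 else 0.3

def classify(features):
    # Posterior of each leaf class = product of branch probabilities on its path.
    p_healthy = healthy_vs_faulty(features)
    p_mech = mechanical_vs_electrical(features)
    return {
        "healthy": p_healthy,
        "mechanical fault": (1 - p_healthy) * p_mech,
        "electrical fault": (1 - p_healthy) * (1 - p_mech),
    }

post = classify({"rms": 2.5, "peak_hz": 500})
print(max(post, key=post.get))  # most probable health state
```

The leaf posteriors sum to one by construction, which is what lets the model report a probability of membership for every predefined class.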

  3. Laboratory-based versus non-laboratory-based method for assessment of cardiovascular disease risk: the NHANES I Follow-up Study cohort

    PubMed Central

    Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael

    2008-01-01

    Summary Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0·829. The c statistic of the non-laboratory-based model was 0·831. In men, the results were similar (0·784 for the laboratory-based model and 0·783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only. 
Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687

  4. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform such analysis. These packages provide computationally intensive, and therefore offline, solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases where a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
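The core step of orientation-based inverse kinematics can be sketched in 2-D: a joint angle is the relative orientation of the child segment with respect to its parent. The real algorithm works with 3-D orientations from inertial sensors and the OpenSim model's joint definitions; this planar version only illustrates the idea, with invented headings.

```python
def joint_angle(parent_heading_deg, child_heading_deg):
    """Relative angle of the child segment w.r.t. its parent,
    wrapped to the interval (-180, 180]."""
    d = (child_heading_deg - parent_heading_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

print(joint_angle(30.0, 120.0))  # 90.0: e.g. an elbow flexed 90 degrees
```

The wrapping step matters in practice: sensor headings cross the 0/360 boundary freely, while anatomical angles must stay continuous.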

  5. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
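The time-step sensitivity the authors report can be illustrated with the simplest update used in vertex models: forward-Euler integration of an overdamped relaxation toward a rest configuration, which diverges once the step exceeds the stability limit (2/k for this linear force; all values illustrative).

```python
def relax(x0, k, dt, steps):
    """Forward-Euler integration of the overdamped equation dx/dt = -k*x,
    the same scheme used to move vertex positions under mechanical forces."""
    x = x0
    for _ in range(steps):
        x += -k * x * dt
    return x

print(abs(relax(1.0, 1.0, 0.1, 100)) < 1e-3)  # True: small dt converges
print(abs(relax(1.0, 1.0, 2.5, 100)) > 1.0)   # True: dt > 2/k diverges
```

In a full vertex model the divergence is usually masked by rearrangement thresholds rather than visible as a blow-up, which is exactly why such implementation parameters need explicit scrutiny.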

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
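The belief-state update such an executive performs over a hidden-state Markov model can be sketched as a standard predict-correct step. The transition matrix and observation likelihoods below are illustrative placeholders, not RMPL semantics.

```python
def belief_update(belief, transition, likelihood):
    """One step of hidden-state belief update: predict with the transition
    model, then correct with observation likelihoods and normalize."""
    n = len(belief)
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    unnorm = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two states (nominal, failed) with a 10% failure rate and an uninformative
# observation: the belief drifts toward the failed state.
print(belief_update([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]], [0.5, 0.5]))
```

The compactness argument in the abstract is about avoiding the explicit enumeration of this state space; the arithmetic per step, however, is exactly this.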

  7. Time-Based Readout of a Silicon Photomultiplier (SiPM) for Time of Flight Positron Emission Tomography (TOF-PET)

    NASA Astrophysics Data System (ADS)

    Powolny, F.; Auffray, E.; Brunner, S. E.; Garutti, E.; Goettlich, M.; Hillemanns, H.; Jarron, P.; Lecoq, P.; Meyer, T.; Schultz-Coulon, H. C.; Shen, W.; Williams, M. C. S.

    2011-06-01

    Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. Therefore the amplifier inputs are connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm2 SiPM-model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm3 LSO crystal coupled to a SiPM. The measured time coincidence precision and the limitations in the overall timing accuracy are interpreted using Monte Carlo/SPICE simulation, Poisson statistics, and geometric effects of the crystal.
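The readout principle, leading edge for time and trailing edge for energy, amounts to a time-over-threshold measurement. A minimal sketch on a synthetic sampled pulse (threshold and sampling step invented):

```python
def time_over_threshold(samples, dt, threshold):
    """Return (timestamp of leading-edge crossing, pulse width).
    The leading edge gives the time information; the width between the
    leading and trailing edges encodes the deposited energy."""
    above = [i for i, v in enumerate(samples) if v > threshold]
    if not above:
        return None
    lead, trail = above[0], above[-1] + 1
    return lead * dt, (trail - lead) * dt

pulse = [0, 0, 2, 5, 7, 6, 4, 2, 0]
print(time_over_threshold(pulse, dt=1.0, threshold=1.0))  # (2.0, 6.0)
```

Real discriminators such as NINO do this in analog electronics rather than on samples, which is how the sub-20 ps jitter is reached.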

  8. Vocational-Technical Education Reforms in Germany, Netherlands, France and U.K. and Their Implications to Taiwan.

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng

    Three major models of vocational education and training provision for the 16- to 19-year-old age group have been identified: schooling model, which emphasizes full-time schooling until age 18; dual model, which involves mainly work-based apprenticeship training with some school-based general education; and mixed model. Germany is an exemplar of…

  9. Building a mechanistic understanding of predation with GPS-based movement data.

    PubMed

    Merrill, Evelyn; Sand, Håkan; Zimmermann, Barbara; McPhee, Heather; Webb, Nathan; Hebblewhite, Mark; Wabakken, Petter; Frair, Jacqueline L

    2010-07-27

    Quantifying kill rates and sources of variation in kill rates remains an important challenge in linking predators to their prey. We address current approaches to using global positioning system (GPS)-based movement data for quantifying key predation components of large carnivores. We review approaches to identify kill sites from GPS movement data as a means to estimate kill rates and address advantages of using GPS-based data over past approaches. Despite considerable progress, modelling the probability that a cluster of GPS points is a kill site is no substitute for field visits, but can guide our field efforts. Once kill sites are identified, time spent at a kill site (handling time) and time between kills (killing time) can be determined. We show how statistical models can be used to investigate the influence of factors such as animal characteristics (e.g. age, sex, group size) and landscape features on either handling time or killing efficiency. If we know the prey densities along paths to a kill, we can quantify the 'attack success' parameter in functional response models directly. Problems remain in incorporating the behavioural complexity derived from GPS movement paths into functional response models, particularly in multi-prey systems, but we believe that exploring the details of GPS movement data has put us on the right path.
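The first step the review discusses, grouping consecutive GPS fixes into spatial clusters as candidate kill sites, can be sketched as below. The radius and minimum-fix thresholds are illustrative, and as the authors stress, clusters flagged this way still require field verification.

```python
import math

def clusters(fixes, radius, min_fixes):
    """Group consecutive (x, y) fixes whose distance to the running
    cluster centroid stays within `radius`; keep clusters with at
    least `min_fixes` points as candidate kill sites."""
    out, current = [], [fixes[0]]
    for p in fixes[1:]:
        cx = sum(x for x, _ in current) / len(current)
        cy = sum(y for _, y in current) / len(current)
        if math.hypot(p[0] - cx, p[1] - cy) <= radius:
            current.append(p)
        else:
            if len(current) >= min_fixes:
                out.append(current)
            current = [p]
    if len(current) >= min_fixes:
        out.append(current)
    return out
```

Once clusters are identified, time spent within a cluster approximates handling time and the gap between clusters approximates killing time, the quantities the review links to functional response models.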

  10. Development of a human cadaver model for training in laparoscopic donor nephrectomy.

    PubMed

    Sutton, Erica R H; Billeter, Adrian; Druen, Devin; Roberts, Henry; Rice, Jonathan

    2017-06-01

    The organ procurement network recommends that a surgeon record 15 cases as surgeon or assistant for laparoscopic donor nephrectomies (LDN) prior to independent practice. The literature suggests that the learning curve for improved perioperative and patient outcomes is closer to 35 cases. In this article, we describe our development of a model utilizing fresh tissue and objective, quantifiable endpoints to document surgical progress and efficiency in each of the major steps involved in LDN. Phase I of model development focused on the modifications necessary to maintain visualization for laparoscopic surgery in a human cadaver. Phase II tested proposed learner-based metrics of procedural competency for multiport LDN by timing procedural steps of LDN in a novice learner. Phases I and II required 12 and 9 cadavers, respectively, with a total of 35 kidneys utilized. The following metrics improved with trial number for multiport LDN: time taken for dissection of the gonadal vein, ureter, renal hilum, and adrenal and lumbar veins; simulated warm ischemic time (WIT); and operative time. Human cadavers can be used for training in LDN, as evidenced by improvements in timed learner-based metrics. This simulation-based model fills a gap in available training options for surgeons. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. A New MRI-Based Model of Heart Function with Coupled Hemodynamics and Application to Normal and Diseased Canine Left Ventricles

    PubMed Central

    Choi, Young Joon; Constantino, Jason; Vedula, Vijay; Trayanova, Natalia; Mittal, Rajat

    2015-01-01

    A methodology for the simulation of heart function that combines an MRI-based model of cardiac electromechanics (CE) with a Navier–Stokes-based hemodynamics model is presented. The CE model consists of two coupled components that simulate the electrical and the mechanical functions of the heart. Accurate representations of ventricular geometry and fiber orientations are constructed from the structural magnetic resonance and the diffusion tensor MR images, respectively. The deformation of the ventricle obtained from the electromechanical model serves as input to the hemodynamics model in this one-way coupled approach via imposed kinematic wall velocity boundary conditions and at the same time, governs the blood flow into and out of the ventricular volume. The time-dependent endocardial surfaces are registered using a diffeomorphic mapping algorithm, while the intraventricular blood flow patterns are simulated using a sharp-interface immersed boundary method-based flow solver. The utility of the combined heart-function model is demonstrated by comparing the hemodynamic characteristics of a normal canine heart beating in sinus rhythm against that of the dyssynchronously beating failing heart. We also discuss the potential of coupled CE and hemodynamics models for various clinical applications. PMID:26442254

  12. Comparison of CME/Shock Propagation Models with Heliospheric Imaging and In Situ Observations

    NASA Astrophysics Data System (ADS)

    Zhao, Xinhua; Liu, Ying D.; Inhester, Bernd; Feng, Xueshang; Wiegelmann, Thomas; Lu, Lei

    2016-10-01

    The prediction of the arrival time of fast coronal mass ejections (CMEs) and their associated shocks is highly desirable in space weather studies. In this paper, we use two shock propagation models, i.e., Data Guided Shock Time Of Arrival (DGSTOA) and the Data Guided Shock Propagation Model (DGSPM), to predict the kinematical evolution of interplanetary shocks associated with fast CMEs. DGSTOA is based on the similarity theory of shock waves in the solar wind reference frame, and DGSPM on the non-similarity theory in the stationary reference frame. The inputs are the kinematics of the CME front at the moment of maximum speed, obtained from the geometric triangulation method applied to STEREO imaging observations together with the Harmonic Mean approximation. The outputs provide the subsequent propagation of the associated shock. We apply these models to the CMEs of 2012 January 19, January 23, and March 7. We find that the shock models predict reasonably well the shock’s propagation after the impulsive acceleration. The shock’s arrival time and local propagation speed at Earth predicted by these models are consistent with in situ measurements from WIND. We also employ the Drag-Based Model (DBM) as a comparison, and find that it predicts a steeper deceleration than the shock models after the rapid deceleration phase. The predictions of DBM at 1 au agree with the following ICME or sheath structure, not the preceding shock. These results demonstrate the applicability of the shock models used here for future arrival time prediction of interplanetary shocks associated with fast CMEs.
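The Drag-Based Model used for comparison has a simple governing equation, dv/dt = -γ(v - w)|v - w|, where w is the ambient solar-wind speed and γ the drag parameter. A numerical sketch (γ, w, and the initial state are illustrative, not fitted to the events above):

```python
def dbm_propagate(r0_km, v0_kms, w_kms, gamma_per_km, dt_s, t_end_s):
    """Forward-Euler integration of the drag-based model:
    the CME speed relaxes toward the ambient solar-wind speed."""
    r, v, t = r0_km, v0_kms, 0.0
    while t < t_end_s:
        v += -gamma_per_km * (v - w_kms) * abs(v - w_kms) * dt_s
        r += v * dt_s
        t += dt_s
    return r, v

# A 1000 km/s CME in a 400 km/s wind, followed for one day.
r, v = dbm_propagate(0.0, 1000.0, 400.0, 1e-7, 60.0, 86400.0)
print(round(v, 1), "km/s after one day")
```

Because the deceleration scales with (v - w)^2, a DBM-type curve flattens toward w, which is consistent with the steeper early deceleration the authors note relative to the shock models.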

  13. Utah State University Global Assimilation of Ionospheric Measurements Gauss-Markov Kalman filter model of the ionosphere: Model description and validation

    NASA Astrophysics Data System (ADS)

    Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Thompson, D. C.; Zhu, L.

    2006-11-01

    The Utah State University Gauss-Markov Kalman Filter (GMKF) was developed as part of the Global Assimilation of Ionospheric Measurements (GAIM) program. The GMKF uses a physics-based model of the ionosphere and a Gauss-Markov Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) observations. The physics-based model is the Ionospheric Forecast Model (IFM), which accounts for five ion species and covers the E region, F region, and the topside from 90 to 1400 km altitude. Within the GMKF, the IFM-derived ionospheric densities constitute a background density field on which perturbations are superimposed based on the available data and their errors. In the current configuration, the GMKF assimilates slant total electron content (TEC) from a variable number of global positioning satellite (GPS) ground sites, bottomside electron density (Ne) profiles from a variable number of ionosondes, in situ Ne from four Defense Meteorological Satellite Program (DMSP) satellites, and nighttime line-of-sight ultraviolet (UV) radiances measured by satellites. To test the GMKF for real-time operations and to validate its ionospheric density specifications, we have tested the model performance for a variety of geophysical conditions. During these model runs, various combinations of data types and data quantities were assimilated. To simulate real-time operations, the model ran continuously and automatically and produced three-dimensional global electron density distributions in 15 min increments. In this paper we will describe the Gauss-Markov Kalman filter model and present results of our validation study, with an emphasis on comparisons with independent observations.
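A scalar caricature of the Gauss-Markov Kalman filter step described above: the physics-based background is held fixed while a Gauss-Markov (exponentially decaying) perturbation is forecast and then corrected by each observation. All numbers are invented; the real GMKF operates on three-dimensional global density fields.

```python
def gmkf_step(perturbation, variance, background, obs, obs_var, decay, q):
    """One forecast/update cycle for a scalar Gauss-Markov perturbation
    superimposed on a fixed physics-based background value."""
    # Gauss-Markov forecast: the perturbation relaxes toward zero.
    perturbation *= decay
    variance = decay ** 2 * variance + q
    # Kalman update against an observation of the full state.
    gain = variance / (variance + obs_var)
    perturbation += gain * (obs - (background + perturbation))
    variance *= (1.0 - gain)
    return perturbation, variance

p, var = gmkf_step(0.0, 1.0, background=10.0, obs=12.0, obs_var=1.0,
                   decay=0.9, q=0.1)
print(round(10.0 + p, 2))  # analysis value pulled toward the observation
```

The decay term is what makes the filter "Gauss-Markov": in the absence of data the analysis relaxes back to the physics-based background rather than persisting indefinitely.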

  14. Consistent View of Protein Fluctuations from All-Atom Molecular Dynamics and Coarse-Grained Dynamics with Knowledge-Based Force-Field.

    PubMed

    Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-01-08

    It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of a coarse-grained protein model, the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.

  15. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  16. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    PubMed

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.
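The double factorial paradigm's architecture diagnostic can be sketched as the mean interaction contrast (MIC) over the four factorial salience conditions. Under the standard assumptions of this literature, a positive MIC is consistent with parallel minimum-time processing and an additive (zero) MIC with serial processing; the RT values below are invented.

```python
def mean_interaction_contrast(rt_ll, rt_lh, rt_hl, rt_hh):
    """MIC = mean(LL) - mean(LH) - mean(HL) + mean(HH), where L/H denote
    low/high salience of the two stimulus dimensions and each argument is
    a list of response times from that factorial cell."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(rt_ll) - mean(rt_lh) - mean(rt_hl) + mean(rt_hh)

# Overadditive pattern: both-low is disproportionately slow.
print(mean_interaction_contrast([500.0], [420.0], [430.0], [380.0]))  # 30.0
```

The accuracy statistic of Shaw and colleagues is formally analogous to this contrast but computed over detection probabilities rather than response times, which is why, as the abstract notes, it cannot by itself separate parallel from serial processing.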

  17. Application of satellite-based rainfall and medium range meteorological forecast in real-time flood forecasting in the Mahanadi River basin

    NASA Astrophysics Data System (ADS)

    Nanda, Trushnamayee; Beria, Harsh; Sahoo, Bhabagrahi; Chatterjee, Chandranath

    2016-04-01

    The increasing frequency of hydrologic extremes in a warming climate calls for the development of reliable flood forecasting systems. The unavailability of meteorological parameters in real time, especially in the developing parts of the world, makes it a challenging task to accurately predict floods, even at short lead times. The satellite-based Tropical Rainfall Measuring Mission (TRMM) provides an alternative to the real-time precipitation data scarcity. Moreover, rainfall forecasts by numerical weather prediction models, such as the medium-range forecasts issued by the European Centre for Medium-Range Weather Forecasts (ECMWF), are promising for multistep-ahead flow forecasts. We systematically evaluate these rainfall products over a large catchment in Eastern India (the Mahanadi River basin). We found spatially coherent trends, with both the real-time TRMM rainfall and ECMWF rainfall forecast products overestimating low rainfall events and underestimating high rainfall events; no significant bias was found for medium rainfall events. Another key finding was that these rainfall products captured the phase of the storms well, but suffered from consistent under-prediction. The utility of the real-time TRMM and ECMWF forecast products is evaluated by rainfall-runoff modeling using different artificial neural network (ANN)-based models up to 3 days ahead. Keywords: TRMM; ECMWF; forecast; ANN; rainfall-runoff modeling

  18. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blinks. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application, and a fair comparison of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range 37-52%, while showing no significant difference from the Dumpala model.
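The parameter set shared by these peak models (amplitude, width, slope) can be sketched as a simple feature extractor around a local maximum. The signal below is synthetic, and the real models differ in exactly which parameters and thresholds they use.

```python
def peak_features(signal, i):
    """Amplitude, width, and mean slopes of the peak at index i,
    measured out to the neighbouring local minima."""
    l = i
    while l > 0 and signal[l - 1] < signal[l]:
        l -= 1
    r = i
    while r < len(signal) - 1 and signal[r + 1] < signal[r]:
        r += 1
    return {
        "amplitude": signal[i],
        "width": r - l,
        "rise_slope": (signal[i] - signal[l]) / max(i - l, 1),
        "fall_slope": (signal[i] - signal[r]) / max(r - i, 1),
    }

print(peak_features([0, 1, 3, 6, 4, 2, 0], 3))
```

A feature vector of this kind, computed per candidate peak, is what the ELM classifier in the study consumes when deciding whether a candidate is a true peak.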

  19. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

    An observer is described for blending two independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with an… The benefits of blending will be lost if either of the models includes deterministic modeling errors.

  20. New Insights to Compare and Choose TKTD Models for Survival Based on an Interlaboratory Study for Lymnaea stagnalis Exposed to Cd.

    PubMed

    Baudrot, Virgile; Preux, Sara; Ducrot, Virginie; Pave, Alain; Charles, Sandrine

    2018-02-06

    Toxicokinetic-toxicodynamic (TKTD) models, such as the General Unified Threshold model of Survival (GUTS), provide a consistent process-based framework, compared to classical dose-response models, for analyzing both time- and concentration-dependent data sets. However, the extent to which GUTS models (Stochastic Death (SD) and Individual Tolerance (IT)) lead to a better fit than a classical dose-response model at a given target time (TT) has been poorly investigated. Our paper highlights that GUTS estimates are generally more conservative and have reduced uncertainty, through smaller credible intervals, for the studied data sets than classical TT approaches. GUTS models also enable estimating any x% lethal concentration at any time (LC(x,t)), and provide biological information on the internal processes occurring during the experiments. While both GUTS-SD and GUTS-IT models outcompete classical TT approaches, choosing one preferentially to the other is still challenging. Indeed, the estimates of survival rate over time and LC(x,t) are very close between both models, but our study also points out that the joint posterior distributions of SD model parameters are sometimes bimodal, while two parameters of the IT model seem strongly correlated. Therefore, the selection between these two models has to be supported by the experimental design and the biological objectives, and this paper provides some insights to drive this choice.
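A hedged sketch of the GUTS-SD survival function for a constant exposure concentration: one-compartment toxicokinetics drives scaled damage, and hazard accumulates in proportion to damage above a threshold, on top of a background hazard. All parameter values below are invented for illustration.

```python
import math

def guts_sd_survival(c, t, kd, z, kk, hb, dt=0.01):
    """Survival probability at time t under constant concentration c.
    kd: dominant rate constant; z: threshold; kk: killing rate;
    hb: background hazard. Simple forward-Euler integration."""
    damage, cum_hazard, tau = 0.0, 0.0, 0.0
    while tau < t:
        damage += kd * (c - damage) * dt            # toxicokinetics
        cum_hazard += (kk * max(damage - z, 0.0) + hb) * dt
        tau += dt
    return math.exp(-cum_hazard)

# With zero exposure only the background hazard acts.
print(round(guts_sd_survival(0.0, 1.0, kd=1.0, z=0.5, kk=1.0, hb=0.1), 3))
```

Because survival is defined for every (c, t) pair, any LC(x,t) can be read off the same fitted parameters, which is the practical advantage over a fixed-target-time dose-response fit.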

  1. A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge

    2016-12-01

    A real-time photo-realistic rendering algorithm for ocean color is introduced in this paper, which considers the impact of an ocean bio-optical model. The bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particles, etc., which make different contributions to the absorption and scattering of light. We decompose the emergent light of the ocean surface into the light reflected from the sun and the sky, and the subsurface scattering light. We establish an ocean surface transmission model based on an ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law, and this model's outputs become the incident light parameters of the subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the emergent radiance of the scattered light in different directions. Then, we blend in the reflection of sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we use the radiance reflectances calculated by the Hydrolight radiative transfer model and by our algorithm to validate the physical realism of our method; the results show that our algorithm can achieve real-time, highly realistic ocean color scenes.
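The final blending step can be sketched with Schlick's approximation to the Fresnel reflectance, a common real-time stand-in for the full Fresnel law (f0 ≈ 0.02 for water at normal incidence), which weights reflected sun/sky light against subsurface scattered light. Colors and angles below are illustrative.

```python
def schlick(cos_theta, f0=0.02):
    """Schlick's approximation to the Fresnel reflectance for a view
    direction making angle theta with the surface normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ocean_color(reflected_rgb, subsurface_rgb, cos_theta):
    """Blend sky/sun reflection with subsurface scattering by the
    Fresnel weight."""
    f = schlick(cos_theta)
    return tuple(f * r + (1.0 - f) * s
                 for r, s in zip(reflected_rgb, subsurface_rgb))

# Looking straight down (cos_theta = 1): the water body color dominates.
print(ocean_color((1.0, 1.0, 1.0), (0.0, 0.3, 0.4), 1.0))
```

At grazing angles (cos_theta near 0) the Fresnel weight approaches 1 and the surface becomes mirror-like, which is the visual behavior the rendering model must reproduce.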

  2. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternating step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    PubMed

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emission and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model based on a genetic algorithm is established to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing, with nine scenarios designed using different weights for emissions and traffic efficiency. Compared with results obtained using the Highway Capacity Manual (HCM) 2010, signal timing optimized by the proposed model decreases vehicle delay and emissions more significantly. The optimization model can be applied in other cities, providing support for eco-signal design and development.
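    The genetic-algorithm step can be sketched as a weighted-sum search over a single green-time variable. The delay and emission functions below are illustrative assumptions, not the paper's field-data fits, and the operator choices (truncation selection, blend crossover, Gaussian mutation) are mine:

    ```python
    import random

    # Toy objectives: delay falls and emissions rise with longer green time g (s).
    # Both functions are illustrative assumptions, not calibrated models.
    def delay(g):     return 1000.0 / g
    def emissions(g): return 2.0 * g

    def fitness(g, w_delay=0.5, w_emis=0.5):
        """Weighted-sum scalarization of the two objectives."""
        return w_delay * delay(g) + w_emis * emissions(g)

    def optimize(pop_size=30, gens=60, lo=10.0, hi=90.0, seed=1):
        rng = random.Random(seed)
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)                 # truncation selection
            parents = pop[:pop_size // 2]         # elitism: best half survives
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                child = 0.5 * (a + b) + rng.gauss(0.0, 2.0)  # crossover + mutation
                children.append(min(hi, max(lo, child)))
            pop = parents + children
        return min(pop, key=fitness)
    ```

    With these toy objectives the optimum green time is √500 ≈ 22.4 s; a real deployment would optimize full multi-phase timing plans against calibrated delay and emission models, varying the weights to trace out the trade-off as in the nine Beijing scenarios.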

  4. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan frequently encounters heavy rainfall, with three to four typhoons striking every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time series correction techniques. Predicted rainfall is used as input to a rainfall-runoff model, and the resulting discharges drive a 1-D river routing model that outputs simulated water stages at 487 cross sections for the next 48 h. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected with a time series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The simulated water stages are displayed on a web-based platform, and the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time water stage measurements, and rainfall can also be shown on this platform. If a simulated water stage exceeds the embankments of the Tamsui River, the FEWS alert lights flash on the screen. The platform runs periodically and automatically to generate graphical simulations of flood water stages for flood disaster prevention and decision making.
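    The real-time correction step can be sketched by fitting an autoregressive model to recent forecast errors and adding the predicted next error to the model forecast. This is a minimal sketch of the general AR idea, not the study's operational implementation:

    ```python
    import numpy as np

    def fit_ar(errors, p=2):
        """Least-squares AR(p) coefficients for a 1-D forecast-error series."""
        X = np.column_stack(
            [errors[p - k - 1 : len(errors) - k - 1] for k in range(p)])
        y = errors[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def correct(forecast, recent_errors, coef):
        """Adjust a model forecast by the AR-predicted next error."""
        p = len(coef)
        # coef[k] multiplies the error at lag k+1 (most recent first).
        return forecast + float(coef @ recent_errors[-p:][::-1])
    ```

    In operation the error series would be the difference between gauged and simulated stages at each cross section, refitted as new measurements arrive.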

  5. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three-dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal, or human subjects in real time using a single computer with inexpensive video equipment, and does not require markers or specialized clothing. A simple model of rigid linked segments is constructed for the subject and tracked using three-dimensional volumetric data collected by a multiple-camera video imaging system. A physics-based method is then used to compute forces that align the model with subsequent volumetric data sets in real time. The method handles occlusion of segments, accommodates joint limits, velocity constraints, and collision constraints, and provides for error recovery. It further eliminates singularities in Jacobian-based calculations, which have been problematic in alternative methods.

  6. Time-Related Decay or Interference-Based Forgetting in Working Memory?

    ERIC Educational Resources Information Center

    Portrat, Sophie; Barrouillet, Pierre; Camos, Valerie

    2008-01-01

    The time-based resource-sharing model of working memory assumes that memory traces suffer from a time-related decay when attention is occupied by concurrent activities. Using complex continuous span tasks in which temporal parameters are carefully controlled, P. Barrouillet, S. Bernardin, S. Portrat, E. Vergauwe, & V. Camos (2007) recently…

  7. A closed-loop artificial pancreas using a proportional integral derivative with double phase lead controller based on a new nonlinear model of glucose metabolism.

    PubMed

    Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves

    2013-05-01

    Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data. New control models, and controllers derived from them, are needed. A proportional integral derivative with double phase lead controller was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients, validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A database from a previous trial was used to compare the performance of our controller with previously published results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70-180 mg/dl). Blood glucose values were in the target range 77% of the time, and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. The linear controller presented, based on the linearization of a new, easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. © 2013 Diabetes Technology Society.
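    A minimal discrete PID loop illustrates the controller backbone; the double phase-lead compensation and the glucose-model-specific gains of the paper are not reproduced here, and all numbers below are assumptions:

    ```python
    class PID:
        """Discrete PID controller (a generic sketch; the paper's double
        phase-lead stage and tuned gains are not reproduced)."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            deriv = (0.0 if self.prev_error is None
                     else (error - self.prev_error) / self.dt)
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv
    ```

    Driving a toy first-order plant with this loop shows the closed-loop settling behavior; an artificial pancreas would replace the plant with the linearized glucose-insulin model and add the phase-lead filters.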

  8. Real-time plasma control in a dual-frequency, confined plasma etcher

    NASA Astrophysics Data System (ADS)

    Milosavljević, V.; Ellingboe, A. R.; Gaman, C.; Ringwood, J. V.

    2008-04-01

    The physics issues of developing model-based control of plasma etching are presented. A novel methodology for incorporating real-time model-based control of plasma processing systems is developed. The methodology is developed for control of two dependent variables (ion flux and chemical densities) by two independent controls (27 MHz power and O2 flow). A phenomenological physics model of the nonlinear coupling between the independent controls and the dependent variables of the plasma is presented. By using a design of experiment, the functional dependencies of the response surface are determined. In conjunction with the physical model, the dependencies are used to deconvolve the sensor signals onto the control inputs, allowing compensation of the interaction between control paths. The compensated sensor signals and compensated set-points are then used as inputs to proportional-integral-derivative controllers to adjust radio frequency power and oxygen flow to yield the desired ion flux and chemical density. To illustrate the methodology, model-based real-time control is realized in a commercial semiconductor dielectric etch chamber. The dual-radio-frequency symmetric diode operates with typical commercial fluorocarbon feed-gas mixtures (Ar/O2/C4F8). Key parameters for dielectric etching are known to include ion flux to the surface and surface flux of oxygen-containing species. Control is demonstrated using diagnostics of electrode-surface ion current and chemical densities of O, O2, and CO measured by optical emission spectrometry and/or mass spectrometry. Using our model-based real-time control, the set-point tracking accuracy to changes in chemical species density and ion flux is enhanced.
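    The deconvolution of sensor signals onto the control inputs can be sketched as inverting a 2×2 response matrix so that each PID loop sees a nominally independent channel. The matrix entries below are illustrative assumptions, not the paper's design-of-experiment fit:

    ```python
    import numpy as np

    # Assumed static 2x2 response of (ion flux, O density) to
    # (RF power, O2 flow); illustrative numbers only.
    G = np.array([[0.8, 0.1],
                  [0.2, 0.9]])

    def decouple(setpoints, measurements):
        """Map output errors back through G^-1 so that each control channel
        sees a compensated, nominally independent error signal."""
        return np.linalg.solve(G, np.asarray(setpoints) - np.asarray(measurements))
    ```

    Feeding each component of the decoupled error to its own PID loop approximately cancels the cross-coupling, which is the core of the compensation scheme described above; the real system identifies G from the response-surface experiments and handles the nonlinearity around the operating point.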

  9. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing on embedded systems will enhance the applicability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computational load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.

  10. Activity-based costing: a practical model for cost calculation in radiotherapy.

    PubMed

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighted by factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment costs. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This means that products with a prolonged total or daily treatment time are the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
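    The time-driven allocation principle can be sketched as a sum of resource rates times activity durations, weighted by a complexity factor. All rates, weights, and activity names below are hypothetical, not the Leuven department's figures:

    ```python
    def treatment_cost(resource_cost_per_min, activities):
        """Allocate resource costs to a treatment by time consumption,
        weighted by a complexity factor (illustrative sketch only)."""
        return sum(resource_cost_per_min[res] * minutes * weight
                   for res, minutes, weight in activities)

    # Hypothetical treatment: 30 min on the linac (complexity weight 1.5)
    # plus 20 min of staff time (weight 1.0).
    rates = {"linac": 10.0, "staff": 2.0}
    plan = [("linac", 30, 1.5), ("staff", 20, 1.0)]
    ```

    Under this principle, a product with a longer total or daily treatment time accumulates cost linearly with the minutes it consumes, which is exactly why prolonged treatments come out as the most costly.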

  11. Prevalence and influences of preschoolers' sedentary behaviors in early learning centers: a cross-sectional study.

    PubMed

    Tucker, Patricia; Vanderloo, Leigh M; Burke, Shauna M; Irwin, Jennifer D; Johnson, Andrew M

    2015-09-18

    Recent research has highlighted the need for increased evidence regarding the sedentary activity levels of preschoolers. Given the large proportion of time this population spends in various early learning facilities, the exploration of sedentary behaviors within this particular environment should be a priority. The purpose of the study was two-fold: (1) to compare sedentary time of preschoolers in three different early learning environments (i.e., full-day kindergarten [FDK], center-, and home-based childcare); and (2) to assess which characteristics (i.e., staff behaviors, sedentary environment, fixed play environment, portable play environment, sedentary opportunities) of these early learning environments influence preschoolers' sedentary time. Data collection occurred between September 2011 and June 2012. Preschoolers' sedentary time was measured using Actical(™) accelerometers at a 15 s epoch. The Environment and Policy Assessment and Observation (EPAO) tool was used to assess the sedentary environment of participating early learning classrooms, and those subscales (n = 5) identified by prior evidence as potentially influencing sedentary time in early learning centers were explored in the current study. A linear mixed model ANCOVA was carried out to determine differences in sedentary time based on the type of early learning environment, while direct entry regression analyses were performed to describe the relationships between sedentary time and the five sedentary-specific EPAO subscales. Preschoolers (n = 218) from 28 early learning programs (i.e., 8 FDK, 9 center-, and 8 home-based childcare facilities) participated. Accelerometry data revealed that preschoolers attending center-based childcare engaged in the highest rate of sedentary time (41.62 mins/hr, SD = 3.78), compared to preschoolers in home-based childcare (40.72 mins/hr, SD = 6.34) and FDK (39.68 mins/hr, SD = 3.43). The models for FDK, center-based, and home-based childcare, each comprising the five EPAO subscales, accounted for 10.5%, 5.9%, and 40.78% of the variability in sedentary time, respectively. Only the models for FDK and home-based childcare were statistically significant (p < .05). This is the first exploration of differences in sedentary time among preschoolers in different early learning arrangements. Findings highlight the substantial portion of the day preschoolers spend in sedentary pursuits and, subsequently, the ongoing need to reduce preschoolers' sedentary time in early learning programs, particularly among those attending center-based childcare facilities.

  12. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    PubMed

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk and low-risk classification or survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that small sample sizes and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses may actually be completely different diseases at the genotype and molecular level, so the utility of the AFT model for survival time prediction is limited when such biological differences have not been previously identified. To overcome these two dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and the survival time of patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes that are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; and 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is a more appropriate tool for survival analysis in clinical cancer research.

  13. Integrating Near-Real Time Hydrologic-Response Monitoring and Modeling for Improved Assessments of Slope Stability Along the Coastal Bluffs of the Puget Sound Rail Corridor, Washington State

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.

    2015-12-01

    Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
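    The factor-of-safety calculation can be illustrated with the classic infinite-slope formula with pore pressure at the slip surface; the study's suction-stress formulation differs in detail, and all parameter values below are assumptions:

    ```python
    import math

    def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
        """Infinite-slope factor of safety: cohesion c (Pa), friction angle
        phi (deg), unit weight gamma (N/m^3), slip depth z (m), slope angle
        beta (deg), pore pressure u (Pa). Classic form; the study's
        suction-stress variant generalizes u to unsaturated conditions."""
        phi, beta = math.radians(phi_deg), math.radians(beta_deg)
        tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
        sigma_n = gamma * z * math.cos(beta) ** 2           # total normal stress
        return (c + (sigma_n - u) * math.tan(phi)) / tau
    ```

    Rising pore pressure from simulated infiltration lowers the effective normal stress and hence the factor of safety, which is how the calibrated flow model translates rainfall into a time-varying landslide-potential estimate.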

  14. Global distribution of carbon turnover times in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carvalhais, Nuno; Forkel, Matthias; Khomik, Myroslava; Bellarby, Jessica; Jung, Martin; Migliavacca, Mirco; Mu, Mingquan; Saatchi, Sassan; Santoro, Maurizio; Thurner, Martin; Weber, Ulrich; Ahrens, Bernhard; Beer, Christian; Cescatti, Alessandro; Randerson, James T.; Reichstein, Markus

    2015-04-01

    The response of the carbon cycle in terrestrial ecosystems to climate variability remains one of the largest uncertainties affecting future projections of climate change. This feedback between the terrestrial carbon cycle and climate is partly determined by the response of carbon uptake and by changes in the residence time of carbon in land ecosystems, which depend on climate, soil, and vegetation type. Thus, it is of foremost importance to quantify the turnover times of carbon in terrestrial ecosystems and their spatial co-variability with climate. Here, we develop a global, spatially explicit and observation-based assessment of whole-ecosystem carbon turnover times (τ) to investigate its co-variation with climate at the global scale. Assuming a balance between uptake (gross primary production, GPP) and emission fluxes, τ can be defined as the ratio between the total stock (C_total) and the output or input fluxes (GPP). The estimation of vegetation stocks (C_veg) relies on new remote sensing-based estimates from Saatchi et al (2011) and Thurner et al (2014), while soil carbon stocks (C_soil) are estimated based on state-of-the-art global (Harmonized World Soil Database) and regional (Northern Circumpolar Soil Carbon Database) datasets. The uptake flux estimates are based on global observation-based fields of GPP (Jung et al., 2011). Globally, we find an overall mean carbon turnover time of 23 (+7/−4) years (95% confidence interval). Strong spatial variability is also observed, from shorter residence times in equatorial regions to longer periods at latitudes north of 75°N (mean τ of 15 and 255 years, respectively). The observed latitudinal patterns reflect clear dependencies on temperature, with τ increasing from the equator to the poles, which is consistent with our current understanding of temperature controls on ecosystem dynamics. However, long turnover times are also observed in semi-arid and forest-herbaceous transition regions. Furthermore, based on a local correlation analysis, our results reveal a similarly strong association between τ and precipitation. A further analysis of carbon turnover times as simulated by state-of-the-art coupled climate carbon-cycle models from the CMIP5 experiments reveals wide variations between models and a tendency to underestimate the global τ by 36%. The latitudinal patterns correlate significantly with the observation-based patterns; however, the models show stronger associations between τ and temperature than the observation-based estimates. In general, the stronger relationship between τ and precipitation is not reproduced, and the modeled turnover times are significantly faster in many semi-arid regions. Ultimately, these results suggest a strong role for the hydrological cycle in carbon cycle-climate interactions that is not currently reproduced by Earth system models.
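    The turnover-time definition above reduces to a one-line ratio. A sketch with illustrative global-scale magnitudes (assumed round numbers, not the paper's datasets):

    ```python
    def turnover_time(c_veg, c_soil, gpp):
        """Whole-ecosystem carbon turnover time tau = C_total / GPP
        (years, with stocks in PgC and GPP in PgC per year)."""
        return (c_veg + c_soil) / gpp

    # Illustrative magnitudes (assumed): ~450 PgC vegetation, ~2300 PgC soil,
    # ~120 PgC/yr GPP, giving tau of roughly 23 years.
    ```

    The same ratio applied per grid cell, with remote-sensing stocks and upscaled GPP fields, yields the spatial τ maps whose climate co-variation the study analyzes.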

  15. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models hold only under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model: the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is due entirely to the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
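    The contrast between a cellular automaton and its independence-based continuum approximation can be sketched for a 1-D proliferation process; the update rules below are simplified illustrations, not the models analyzed in the paper:

    ```python
    import random

    def ca_step(sites, p_prolif, rng):
        """One discrete-time step of a 1-D proliferation CA: each occupied
        site attempts to place a daughter in a random neighbour site."""
        n = len(sites)
        new = sites[:]
        for i in range(n):
            if sites[i] and rng.random() < p_prolif:
                j = (i + rng.choice([-1, 1])) % n     # periodic boundary
                if not new[j]:
                    new[j] = 1
        return new

    def mean_field(density, p_prolif):
        """Logistic update assuming independent site occupancies: a daughter
        lands on an empty neighbour with probability (1 - density)."""
        return density + p_prolif * density * (1.0 - density)
    ```

    Because neighbouring occupancies in the CA are correlated (daughters cluster next to parents), the simulated density grows more slowly than the mean-field prediction, which is precisely the independence-assumption error the paper isolates.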

  16. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009).

    PubMed

    Nishiura, Hiroshi

    2011-02-16

    Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of forecasts made before the epidemic peak appears to depend largely on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model, with its simple computation of the uncertainty bounds, was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics, and epidemiological details. The impact of such complexities on forecasting should be explored as the data become available through disease surveillance.
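    The branching-process forecast can be sketched as geometric growth of expected incidence with simple uncertainty bounds. The paper computes bounds from chains of conditional offspring distributions; the normal approximation to a Poisson count used here is a deliberate simplification:

    ```python
    import math

    def forecast_mean(current_cases, R, weeks):
        """Expected weekly incidence under a branching-process approximation
        with reproduction number R per reporting interval."""
        return [current_cases * R ** k for k in range(1, weeks + 1)]

    def poisson_interval(mean, z=1.96):
        """Normal-approximation uncertainty bounds for a Poisson count
        (a simplification of the paper's conditional offspring chains)."""
        half = z * math.sqrt(mean)
        return max(0.0, mean - half), mean + half
    ```

    For example, 100 weekly cases with R = 1.2 project to 120, 144, and about 173 cases over the next three weeks, each with a widening uncertainty band.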

  17. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency- and time-domain information of vibration signals. This information is incorporated in a time-varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal, with a modified least-squares (LS) method adopted to estimate the model parameters. In addition to tracking, the vibration synthesis model is mainly used as a linear time-varying predictor; the health condition of the rotating machine is monitored by checking the residual between the predicted and measured signals. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain, based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine in the no-fault condition, which makes the algorithm well suited to on-line fault detection. The method is validated using both numerical simulation and practical application data, and the fault detection results are compared with those of the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
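    The sinusoidal-synthesis idea can be sketched as a least-squares fit of cosine/sine pairs at assumed known frequencies, with the residual between measured and synthesized signal serving as the health indicator. This omits the adaptive time-varying tracking of the actual SSBAT algorithm:

    ```python
    import numpy as np

    def sinusoid_design(t, freqs):
        """Design matrix of cos/sin pairs at the given frequencies (Hz)."""
        cols = []
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        return np.column_stack(cols)

    def fit_and_residual(t, y, freqs):
        """Least-squares sinusoidal synthesis of y(t); the residual between
        measured and synthesized signal is the fault indicator."""
        A = sinusoid_design(t, freqs)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef, y - A @ coef
    ```

    On a healthy machine the shaft-harmonic model absorbs nearly all the signal energy and the residual stays small; an emerging fault injects components the sinusoidal model cannot synthesize, inflating the residual.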

  18. Creep fatigue life prediction for engine hot section materials (isotropic)

    NASA Technical Reports Server (NTRS)

    Moreno, Vito; Nissley, David; Lin, Li-Sen Jim

    1985-01-01

    The first two years of a two-phase program aimed at improving the high-temperature crack initiation life prediction technology for gas turbine hot section components are discussed. In the Phase 1 (baseline) effort, low cycle fatigue (LCF) models, using a data base generated for a cast nickel-base gas turbine hot section alloy (B1900+Hf), were evaluated for their ability to predict the crack initiation life for relevant creep-fatigue loading conditions and to define the data required for determination of model constants. The variables included strain range and rate, mean strain, strain hold times, and temperature. None of the models predicted all of the life trends within reasonable data requirements. A Cycle Damage Accumulation (CDA) model was therefore developed, which follows an exhaustion-of-material-ductility approach. Material ductility is estimated based on observed similarities of deformation structure between fatigue, tensile, and creep tests. The cycle damage function is based on total strain range, maximum stress, and stress amplitude, and includes both time-independent and time-dependent components. The CDA model accurately predicts all of the trends in creep-fatigue life with loading conditions. In addition, all of the CDA model constants are determinable from rapid-cycle, fully reversed fatigue tests and monotonic tensile and/or creep data.
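    The ductility-exhaustion idea can be sketched as accumulating a per-cycle damage function of strain range, maximum stress, and hold time until a budget of 1 is spent. The power-law form and the constants below are assumptions for illustration, not the CDA model's fitted function:

    ```python
    def cycle_damage(strain_range, max_stress, hold_time,
                     a=1e-3, b=2.0, c=1e-4):
        """Per-cycle damage with a time-independent term (strain range and
        maximum stress) and a time-dependent term (hold time). The form
        and constants are illustrative assumptions, not the CDA fit."""
        return a * strain_range ** b * max_stress + c * hold_time

    def life(strain_range, max_stress, hold_time):
        """Cycles to crack initiation: accumulated damage exhausts a
        normalized ductility budget of 1."""
        return int(round(1.0 / cycle_damage(strain_range, max_stress, hold_time)))
    ```

    Adding a strain hold raises the time-dependent term and shortens the predicted life, mirroring the creep-fatigue interaction the CDA model is built to capture.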

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Secondly, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
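    The on-line estimation loop can be sketched as a one-state filter: a coulomb-counting prediction corrected by a voltage measurement. A linear OCV(SOC) map is assumed here so the EKF Jacobian is constant; the paper's first-order RC model, parameter co-estimation, and proportional-integral error adjustment are omitted:

    ```python
    class SocEKF:
        """One-state SOC filter: coulomb-counting prediction corrected by a
        voltage measurement. The linear OCV(SOC) map is an assumption;
        real cells need a nonlinear OCV curve and an RC dynamic model."""
        def __init__(self, capacity_ah, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
            self.capacity = capacity_ah * 3600.0        # ampere-seconds
            self.soc, self.P, self.Q, self.R = soc0, p0, q, r

        @staticmethod
        def ocv(soc):
            """Assumed linear open-circuit voltage map (illustrative)."""
            return 3.0 + 1.2 * soc

        def step(self, current_a, voltage_v, dt):
            # Predict: coulomb counting (discharge current positive).
            self.soc -= current_a * dt / self.capacity
            self.P += self.Q
            # Update: h(soc) = ocv(soc), Jacobian H = d(ocv)/d(soc) = 1.2.
            H = 1.2
            K = self.P * H / (H * self.P * H + self.R)
            self.soc += K * (voltage_v - self.ocv(self.soc))
            self.P *= (1.0 - K * H)
            return self.soc
    ```

    Even when initialized far from the true SOC, the voltage correction pulls the estimate back, which is the basic robustness property the paper then strengthens with its error-adjustment stage.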

  20. The Effect of Seasonal and Long-Period Geopotential Variations on the GPS Orbits

    NASA Technical Reports Server (NTRS)

    Melachroinos, Stavros A.; Lemoine, Frank G.; Chinn, Douglas S.; Zelensky, Nikita P.; Nicholas, Joseph B.; Beckley, Brian D.

    2013-01-01

    We examine the impact of using seasonal and long-period time-variable gravity (TVG) field models on GPS orbit determination, through simulations from 1994 to 2012. The time-variable gravity models we test include the GRGS release RL02 GRACE-derived 10-day gravity field models up to degree and order 20 (grgs20x20), a 4 x 4 series of weekly coefficients using GGM03S as a base, derived from SLR and DORIS tracking to 11 satellites (tvg4x4), and a harmonic fit to the above 4 x 4 SLR-DORIS time series (goco2s_fit2). These detailed models are compared to GPS orbit simulations using a reference model (stdtvg) based on International Earth Rotation Service (IERS) and International GNSS Service (IGS) repro1 standards. We find that the new TVG modeling produces significant along-track and cross-track orbit differences, as well as annual, semi-annual, draconitic, and long-period effects in the Helmert translation parameters (Tx, Ty, Tz) of the GPS orbits, with magnitudes of several mm. We show that the simplistic TVG modeling approach used by all of the IGS Analysis Centers, which is based on the models provided by the IERS standards, becomes progressively less adequate after 2006 when compared to the seasonal and long-period TVG models.
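    A harmonic fit like the one used to derive goco2s_fit2 from the SLR-DORIS series can be sketched as least squares over bias, trend, and annual/semi-annual terms; the function below is a generic sketch, not the production fit:

    ```python
    import numpy as np

    def harmonic_fit(t_years, series):
        """Least-squares fit of bias, trend, and annual plus semi-annual
        harmonics to a gravity-coefficient time series (generic sketch)."""
        w = 2.0 * np.pi                       # annual angular frequency (1/yr)
        A = np.column_stack([np.ones_like(t_years), t_years,
                             np.cos(w * t_years), np.sin(w * t_years),
                             np.cos(2 * w * t_years), np.sin(2 * w * t_years)])
        coef, *_ = np.linalg.lstsq(A, series, rcond=None)
        return coef, A @ coef
    ```

    Applied per spherical-harmonic coefficient, such a fit condenses a weekly series into a handful of seasonal parameters that an orbit-determination run can evaluate at any epoch.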
