EXPERIMENTAL VALIDATION OF MODEL UPDATING AND DAMAGE DETECTION VIA EIGENVALUE SENSITIVITY METHODS WITH ARTIFICIAL BOUNDARY CONDITIONS
Bouwense, Matthew D.
2017-09-01
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid excessively frequent model updating, a confidence value is introduced, which is itself updated adaptively according to the results of the model performance assessment; the model is updated only when the confidence value is updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately.
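As a rough illustration of the scheme described above, the sketch below combines time-difference samples, a moving-window recursive refit, and a confidence value that triggers updates; the window size, component count, and RMSE-based trigger rule are assumptions, not details from the paper.

```python
# A rough sketch (assumed window size, component count and trigger rule).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

class AdaptiveTDPLS:
    """Time-difference PLS with confidence-triggered moving-window refits."""

    def __init__(self, window=200, n_components=3, conf=1.0):
        self.window, self.conf = window, conf
        self.model = PLSRegression(n_components=n_components)

    def fit(self, X, y):
        # train on first-order time differences of inputs and output
        dX, dy = np.diff(X, axis=0), np.diff(np.asarray(y))
        self.model.fit(dX[-self.window:], dy[-self.window:])

    def predict_next(self, x_prev, x_new, y_prev):
        dy_hat = self.model.predict((x_new - x_prev).reshape(1, -1))
        return y_prev + dy_hat.item()

    def maybe_update(self, X, y, y_hat):
        # refit only when the recent RMSE exceeds the confidence value,
        # then adapt the confidence value to the assessed performance
        e = np.asarray(y[-20:]) - np.asarray(y_hat[-20:])
        rmse = float(np.sqrt(np.mean(e**2)))
        if rmse > self.conf:
            self.fit(X, y)
            self.conf = rmse
            return True
        return False
```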
Normal response function method for mass and stiffness matrix updating using complex FRFs
NASA Astrophysics Data System (ADS)
Pradhan, S.; Modak, S. V.
2012-10-01
Quite often a structural dynamic finite element model needs to be updated so that it accurately predicts dynamic characteristics such as the natural frequencies and mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to update only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of mass and stiffness matrices. However, the problem with FRF-based methods for updating mass and stiffness matrices is that they rely on complex FRFs. Using complex FRFs to update mass and stiffness matrices is not theoretically correct, since complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where only the mass and stiffness matrices are to be updated using FRFs, a formulation based on complex FRFs is not fully justified and leads to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and the robustness of the method in the presence of noise are investigated, and the updating results are compared with those of the existing response function method. The performance of the two approaches is compared for lightly, moderately and heavily damped structures. It is found that the proposed improved method is effective in updating the mass and stiffness matrices in all cases of complete and incomplete data and for all levels and types of damping.
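The distinction the paper draws can be illustrated numerically. The sketch below (an illustration, not the paper's algorithm) compares the complex FRF, which depends on the damping matrix C, with the normal FRF, which depends only on M and K; the 2-DOF system and proportional damping are assumed for the example.

```python
# Illustrative sketch: complex vs. normal FRFs for an assumed 2-DOF system.
import numpy as np

M = np.diag([1.0, 1.5])                           # mass matrix
K = np.array([[300.0, -100.0], [-100.0, 100.0]])  # stiffness matrix
C = 0.02 * K                                      # proportional damping (assumed)

def complex_frf(w):
    # H(w) = (K - w^2 M + i w C)^-1 : affected by M, K and C
    return np.linalg.inv(K - w**2 * M + 1j * w * C)

def normal_frf(w):
    # H_N(w) = (K - w^2 M)^-1 : affected by M and K only, hence
    # suitable for updating the mass and stiffness matrices alone
    return np.linalg.inv(K - w**2 * M)

w = 5.0
print(np.abs(complex_frf(w) - normal_frf(w)).max())  # damping-induced difference
```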
Utilizing Flight Data to Update Aeroelastic Stability Estimates
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
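A minimal sketch of the updating step is given below, assuming a hypothetical `simulate_response` stand-in for the AMD-based extraction of instantaneous frequencies and amplitudes; the residual-based objective is minimized with simulated annealing, here via SciPy's `dual_annealing`.

```python
# Minimal sketch: fit bilinear-stiffness parameters by minimizing residuals
# of instantaneous frequencies/amplitudes with simulated annealing.
import numpy as np
from scipy.optimize import dual_annealing

def simulate_response(k1, k2):
    """Hypothetical stand-in for the AMD-extracted instantaneous quantities."""
    t = np.linspace(0, 1, 100)
    return np.sqrt(k1) + 0 * t, (k2 / k1) * np.exp(-t)   # toy surrogate

f_exp, a_exp = simulate_response(400.0, 120.0)           # "measured" data

def objective(theta):
    f_mod, a_mod = simulate_response(*theta)
    return np.sum((f_exp - f_mod)**2) + np.sum((a_exp - a_mod)**2)

res = dual_annealing(objective, bounds=[(100.0, 900.0), (10.0, 300.0)], seed=0)
print(res.x)   # recovered stiffness parameters, ideally near (400, 120)
```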
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method, provided the numerical model represents the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occur in the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed in which geometry shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occur during deep drawing. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for further analysis, as it represents the manufactured part more accurately; simulations performed using the updated model with its accurate geometry will therefore yield more reliable results.
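For reference, the MAC matrix whose diagonal terms the GRSM optimization maximizes can be computed as below; the mode-shape matrices and their sizes are illustrative assumptions.

```python
# Sketch of the Modal Assurance Criterion (MAC) matrix; mode shapes are
# stored one mode per column.
import numpy as np

def mac_matrix(phi_test, phi_fem):
    num = np.abs(phi_test.T @ phi_fem)**2
    den = np.outer(np.sum(phi_test**2, axis=0), np.sum(phi_fem**2, axis=0))
    return num / den

phi_a = np.random.rand(50, 4)                # e.g., 50 DOFs, 4 measured modes
phi_b = phi_a + 0.05 * np.random.rand(50, 4) # slightly perturbed FE modes
print(np.diag(mac_matrix(phi_a, phi_b)))     # values near 1 indicate correlation
```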
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
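A minimal sketch of the KL machinery follows, assuming a 1-D exponential covariance kernel for the random field; the truncated KL coordinates play the role of the discretized parameters that sensitivity-based updating would estimate.

```python
# Minimal sketch of a Karhunen-Loeve expansion of a 1-D random field with an
# assumed exponential covariance kernel, discretized along the beam axis.
import numpy as np

n, L, corr_len, sigma = 100, 1.0, 0.3, 0.1
x = np.linspace(0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

vals, vecs = np.linalg.eigh(C)              # spectral decomposition
idx = np.argsort(vals)[::-1][:5]            # keep the 5 dominant KL modes
lam, phi = vals[idx], vecs[:, idx]

xi = np.random.randn(5)                     # standard normal KL coordinates
field = 1.0 + phi @ (np.sqrt(lam) * xi)     # e.g., normalized bending rigidity
```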
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure, as sketched below. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can update the uncertain parameters with good robustness, and the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
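A minimal sketch of such a weighted objective; the weights and numerical values are assumptions for illustration.

```python
# Sketch of a weighted objective combining natural-frequency residuals with
# coordinate strain-MAC residuals; w1/w2 and all values are assumed.
import numpy as np

def objective(f_exp, f_fem, smac_coord, w1=0.5, w2=0.5):
    freq_res = np.sum(((f_exp - f_fem) / f_exp)**2)
    smac_res = np.sum(1.0 - smac_coord)   # 1 - correlation at each coordinate
    return w1 * freq_res + w2 * smac_res

print(objective(np.array([12.1, 33.4]), np.array([12.4, 32.8]),
                smac_coord=np.array([0.98, 0.95, 0.91])))
```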
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
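As an illustration of the first category, simple coefficient updating, the sketch below recalibrates an existing model's linear predictor on new data (intercept and slope update); the synthetic data and drift are assumptions.

```python
# Sketch of the simplest coefficient-updating strategy: logistic
# recalibration of an existing model's linear predictor on new data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
lp_old = rng.normal(size=500)        # old model's linear predictor (assumed)
y_new = rng.binomial(1, 1 / (1 + np.exp(-(0.4 + 0.8 * lp_old))))  # drifted outcomes

recal = LogisticRegression()         # fits logit(p) = a + b * lp_old
recal.fit(lp_old.reshape(-1, 1), y_new)
print(recal.intercept_, recal.coef_) # updated intercept and calibration slope
```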
Capital update factor: a new era approaches.
Grimaldi, P L
1993-02-01
The Health Care Financing Administration (HCFA) has constructed a preliminary model of a new capital update method which is consistent with the framework being developed to refine the update method for PPS operating costs. HCFA's eventual goal is to develop a single update framework for operating and capital costs. Initial results suggest that adopting the new capital update method would reduce capital payments substantially, which might intensify creditors' concerns about extending loans to hospitals.
Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D
2013-07-01
To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period used for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations at two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine whether complex downscaling methods and high-resolution climate models are necessary for updating, or whether simpler methods and lower-resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions for updating IDF curves for urban drainage design.
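A minimal sketch of method (1), the simple change factor, under assumed toy data: the observed return level is scaled by the ratio of future to historical RCM return levels estimated from fitted GEV distributions.

```python
# Sketch of the simple change-factor update of a return level.
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, T):
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

rng = np.random.default_rng(1)
obs  = rng.gumbel(30, 8, 50)   # observed station annual maxima (mm/h), toy data
hist = rng.gumbel(25, 7, 50)   # RCM historical annual maxima
fut  = rng.gumbel(29, 9, 50)   # RCM future annual maxima

cf = return_level(fut, 10) / return_level(hist, 10)   # change factor
print(cf * return_level(obs, 10))                     # updated 10-yr intensity
```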
Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM), and using conventional methods in such cases is computationally expensive or even impossible. A two-level method, combining a Kriging predictor with the component mode synthesis (CMS) technique, was proposed to make FEM updating of large-scale structures feasible. In the first level, CMS was applied to build a reasonable condensed FEM of the complex structure. In the second level, the Kriging predictor, which serves as a surrogate FEM in structural dynamics, was generated based on the condensed FEM. Some key issues of applying the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with measured modal parameters. PMID:24634612
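A rough sketch of the second level, assuming a hypothetical `condensed_fem` stand-in for the CMS-reduced model: a Gaussian-process (Kriging) surrogate is trained on a small design of parameter samples and then replaces repeated FEM evaluations during updating.

```python
# Sketch of a Kriging surrogate of a condensed FEM, mapping updating
# parameters to predicted natural frequencies.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def condensed_fem(theta):
    """Hypothetical stand-in: frequencies of the condensed model."""
    return np.array([np.sqrt(theta[0]), np.sqrt(theta[0] + theta[1])])

thetas = np.random.uniform([0.5, 0.5], [2.0, 2.0], size=(30, 2))  # design sites
freqs = np.array([condensed_fem(t) for t in thetas])

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
kriging.fit(thetas, freqs)       # surrogate replaces repeated FEM evaluations
print(kriging.predict(np.array([[1.2, 0.8]])))
```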
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub "cumulative update", to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
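A minimal sketch of step (2), assuming a hypothetical `fem_frequencies` stand-in for the bridge FE model: a least-squares fit of a stiffness scale factor to the identified natural frequencies.

```python
# Sketch: update an FE stiffness parameter from identified modal frequencies.
import numpy as np
from scipy.optimize import least_squares

def fem_frequencies(theta):
    """Hypothetical stand-in for the bridge FE model's frequencies (Hz)."""
    base = np.array([1.3, 4.1, 8.7])     # initial-model frequencies
    return base * np.sqrt(theta[0])      # stiffness scaling: f ~ sqrt(k)

f_meas = np.array([1.25, 3.95, 8.40])    # identified from ambient vibration

res = least_squares(lambda th: fem_frequencies(th) - f_meas,
                    x0=[1.0], bounds=([0.5], [1.5]))
print(res.x)   # updated stiffness scale factor for the degraded bridge
```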
Particle Filtering Methods for Incorporating Intelligence Updates
2017-03-01
methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both the state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare the performance of the DUS combined with various SMC methods, such as SIR, ASIR and RPF.
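A minimal sketch of a dual state-parameter updating scheme in the spirit described above, using SIR resampling and Liu-West-style kernel smoothing of the parameter particles; the one-state storage model, noise levels, and shrinkage factor are assumptions.

```python
# Minimal sketch: SIR particle filter with dual state-parameter updating and
# kernel smoothing of the parameter; the toy storage model is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
N, a = 1000, 0.98                     # particles; kernel shrinkage factor
S = rng.uniform(0, 10, N)             # state particles (storage)
k = rng.uniform(0.1, 0.9, N)          # parameter particles (recession)

def step(S, k, rain):                 # toy storage-model dynamics
    return S + rain - k * S

for rain, q_obs in [(5.0, 2.1), (0.0, 1.6), (3.0, 2.0)]:
    # kernel smoothing: shrink parameters toward their mean, then jitter
    k = a * k + (1 - a) * k.mean() \
        + np.sqrt(1 - a**2) * k.std() * rng.standard_normal(N)
    S = step(S, k, rain) + 0.1 * rng.standard_normal(N)   # propagate states
    q = k * S                                             # predicted discharge
    w = np.exp(-0.5 * ((q_obs - q) / 0.3)**2)             # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                      # SIR resampling
    S, k = S[idx], k[idx]

print(S.mean(), k.mean())   # posterior means of state and parameter
```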
Li, Ke; Deb, Kalyanmoy; Zhang, Qingfu; Zhang, Qiang
2017-09-01
Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in the generational evolution model, where environmental selection is performed after generating a whole population of offspring. However, in a steady-state evolution model, where the population is updated right after the generation of each new candidate, NDS can be extremely time consuming. This is especially severe when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing the NDS from scratch, our method only updates the NDLs of a limited number of solutions by extracting knowledge from the current NDL structure. Our NDL update method is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared to five other state-of-the-art NDS methods, the proposed method avoids a significant number of unnecessary comparisons, not only on synthetic data sets but also in some real optimization scenarios. Last but not least, we find that the proposed method is also useful for the generational evolution model.
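For background, plain nondominated sorting into levels, the structure the proposed method updates incrementally rather than recomputing from scratch, looks like this (minimization assumed):

```python
# Background sketch: plain nondominated sorting into levels (minimization).
import numpy as np

def dominates(p, q):
    return np.all(p <= q) and np.any(p < q)

def nondominated_levels(F):
    remaining = list(range(len(F)))
    levels = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        levels.append(front)
        remaining = [i for i in remaining if i not in front]
    return levels

F = np.array([[1, 5], [2, 3], [4, 1], [3, 4], [5, 5]])
print(nondominated_levels(F))   # [[0, 1, 2], [3], [4]]
```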
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
Qin, Lei; Snoussi, Hichem; Abdallah, Fahed
2014-01-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated over a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up to date and is protected from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
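For reference, a region covariance descriptor in its basic form is the covariance of per-pixel feature vectors over a region; the five features below are a common illustrative choice, not necessarily those selected by the paper's adaptive method.

```python
# Sketch of a basic region covariance descriptor from per-pixel features
# (x, y, intensity, |Ix|, |Iy|).
import numpy as np

def region_covariance(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Iy, Ix = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(Ix).ravel(), np.abs(Iy).ravel()])
    return np.cov(feats)            # 5 x 5 descriptor, independent of region size

patch = np.random.rand(32, 32)
print(region_covariance(patch).shape)   # (5, 5)
```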
Aircraft engine sensor fault diagnostics using an on-line OBEM update method.
Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye
2017-01-01
This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system in which a Hybrid Kalman Filter (HKF) was incorporated. A large health condition mismatch between the engine and the OBEM, generated by rapid in-flight engine degradation, can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and the update run simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on a turbojet engine Linear-Parameter-Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault.
Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model
NASA Technical Reports Server (NTRS)
Boone, Spencer
2017-01-01
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei
2017-01-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0-mode wave package. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
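A minimal sketch of the Bayesian updating step, assuming a quadratic response surface, Gaussian noise, and a random-walk Metropolis sampler; the feature model and all numbers are illustrative stand-ins for the paper's surface and data.

```python
# Sketch: Metropolis sampling refines a baseline response-surface coefficient
# using a few measurements from the target structure (all values assumed).
import numpy as np

rng = np.random.default_rng(0)
crack = np.array([2.0, 4.0, 6.0])                 # mm, known crack lengths
feat = 0.05 * crack**2 + rng.normal(0, 0.02, 3)   # measured damage features

def log_post(b, b0=0.04, prior_sd=0.02, noise_sd=0.02):
    ll = -0.5 * np.sum((feat - b * crack**2)**2) / noise_sd**2
    return ll - 0.5 * (b - b0)**2 / prior_sd**2   # prior from the baseline model

b, chain = 0.04, []
for _ in range(5000):
    prop = b + 0.005 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(b):
        b = prop
    chain.append(b)
print(np.mean(chain[1000:]))   # posterior mean of the updated coefficient
```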
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved, due to the lack of knowledge about the future state of the catchment under study. In terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape and volume, which are the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study, and its parameters are evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Update to the USDA-ARS fixed-wing spray nozzle models
USDA-ARS?s Scientific Manuscript database
The current USDA-ARS Aerial Spray Nozzle Models were updated to reflect new standardized measurement methods and systems, as well as to increase the operational spray pressure, aircraft airspeed and nozzle orientation angle limits. The new models were developed using both Central Composite Design...
Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng
2017-01-01
A finite element model updating method that combines dynamic and static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, an objective function consisting of static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and the local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This corresponding relation is important for the modification of the local stiffness based on the macro-strain. The local and global parameters can be simultaneously updated. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strain and macro-strain modes can be predicted well using the proposed updated model. PMID:28753912
Application of Artificial Intelligence for Bridge Deterioration Model.
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
2015-01-01
The deterministic bridge deterioration model updating problem is well established in bridge management, but traditional methods and approaches for this problem require manual intervention. An artificial-intelligence-based approach is presented in this paper to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model with data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the parameter updating problem without manual intervention.
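A minimal sketch of the Bayesian parameter update at the core of such an approach, assuming a conjugate normal model for a single deterioration-rate parameter; all numbers are illustrative.

```python
# Sketch: conjugate normal update of a deterioration-rate parameter using
# newly collected inspection data (illustrative numbers throughout).
import numpy as np

mu0, var0 = 0.8, 0.04          # prior: historical deterioration rate (pts/yr)
new_rates = np.array([0.95, 0.88, 1.02, 0.91])   # rates from new inspections
var_obs = 0.09                 # assumed observation variance

n = len(new_rates)
var_post = 1.0 / (1.0 / var0 + n / var_obs)
mu_post = var_post * (mu0 / var0 + new_rates.sum() / var_obs)
print(mu_post, var_post)       # updated parameter, no manual intervention
```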
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Ho-Ling; Davis, Stacy Cagle
2009-12-01
This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided, along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in mid-1990. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both the off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared on the overall totals, to the extent possible, to the current FHWA estimates. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of the fuel use estimates directly affects the data quality of many tables published in Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model, some challenges remain due to aging model equations and the discontinuation of data sources.
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
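A conceptual sketch of the adaptive scheme, under assumed names and thresholds: the observation domain is discretized into segments, each segment is validated, and Bayesian updating uses only segments where the model is judged reliable.

```python
# Conceptual sketch: segment-wise validation gating a Bayesian update.
# `update_fn`, the reliability metric, and the threshold are assumptions.
import numpy as np

def reliability(pred, obs, tol=0.1):
    """Toy validation metric: fraction of points within a relative tolerance."""
    return np.mean(np.abs(pred - obs) <= tol * np.abs(obs))

def adaptive_update(segments, posterior, update_fn, threshold=0.8):
    for pred, obs in segments:                 # sequential pass over segments
        if reliability(pred, obs) >= threshold:
            posterior = update_fn(posterior, obs)   # Bayesian update here
        # else: skip; unreliable predictions would bias the calibration
    return posterior
```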
Real Time Updating Genetic Network Programming for Adapting to the Change of Stock Prices
NASA Astrophysics Data System (ADS)
Chen, Yan; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro
The key to a stock trading model is to take the right trading actions at the right time, primarily based on accurate forecasts of future stock trends. Since effective trading with given stock price information needs an intelligent strategy for decision making, we applied Genetic Network Programming (GNP) to creating a stock trading model. In this paper, we propose a new method called Real Time Updating Genetic Network Programming (RTU-GNP) for adapting to changes in stock prices. There are three important points in this paper: First, the RTU-GNP method makes stock trading decisions considering both the recommendable information of technical indices and the candlestick charts according to real-time stock prices. Second, we combine RTU-GNP with a Sarsa learning algorithm to create the programs efficiently. Also, sub-nodes are introduced in each judgment and processing node to determine appropriate actions (buying/selling) and to select appropriate stock price information depending on the situation. Third, a real-time updating system is introduced for the first time in this paper, considering changes in the trend of stock prices. The experimental results on the Japanese stock market show that the trading model with the proposed RTU-GNP method outperforms other models without real-time updating. We also compared the experimental results of the proposed method with the Buy&Hold method to confirm its effectiveness, and it is clear that the proposed trading model can obtain much higher profits than the Buy&Hold method.
Update to core reporting practices in structural equation modeling.
Schreiber, James B
This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling" [1]. As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
Nonadditive entropy maximization is inconsistent with Bayesian updating.
Pressé, Steve
2014-11-01
The maximum entropy method, used to infer probabilistic models from data, is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
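For context, the standard results being contrasted can be written as follows (a textbook summary, not the paper's derivation): maximizing the additive entropy under a mean constraint yields the exponential form, while maximizing the nonadditive (Tsallis) entropy yields q-exponential distributions.

```latex
% Additive (Shannon/Gibbs) vs. nonadditive (Tsallis) entropy maximization
% under a mean constraint on f (standard forms, constraint conventions vary):
\begin{align}
  S_1[p] &= -\sum_i p_i \ln p_i ,
    & \max S_1 \;\Rightarrow\; p_i &\propto e^{-\lambda f_i}, \\
  S_q[p] &= \frac{1 - \sum_i p_i^{\,q}}{q - 1} \;(q \neq 1),
    & \max S_q \;\Rightarrow\; p_i &\propto \bigl[1 - (1-q)\,\lambda f_i\bigr]^{\frac{1}{1-q}} .
\end{align}
```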
XFEM-based modeling of successive resections for preoperative image updating
NASA Astrophysics Data System (ADS)
Vigneron, Lara M.; Robe, Pierre A.; Warfield, Simon K.; Verly, Jacques G.
2006-03-01
We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution based on the eXtended Finite Element Method (XFEM). The key feature of XFEM is that material discontinuities induced by every new resection can be handled without remeshing or mesh adaptation, as would be required by the conventional Finite Element Method (FEM). We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.
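For context, the standard XFEM displacement approximation behind this capability enriches the usual FE basis with a discontinuous function across each cut; a generic Heaviside-enriched form is:

```latex
% Standard XFEM approximation with Heaviside enrichment across a cut surface:
\begin{equation}
  \mathbf{u}^h(\mathbf{x}) \;=\; \sum_{i \in \mathcal{I}} N_i(\mathbf{x})\,\mathbf{u}_i
  \;+\; \sum_{j \in \mathcal{J}} N_j(\mathbf{x})\, H(\mathbf{x})\,\mathbf{a}_j ,
\end{equation}
```

where $\mathcal{I}$ is the set of all nodes, $\mathcal{J}$ the nodes whose support is cut by a resection surface, and $\mathbf{a}_j$ the added enrichment degrees of freedom; each new resection only adds enrichment terms, leaving the mesh unchanged.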
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure, and its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update cost too much memory and computational resources. In this study, we propose a new efficient scheme for its storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can specify how much of it to store based on the correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated; the off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. This new scheme is tested first with 1-D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
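A minimal sketch of the storage scheme, assuming an exponential covariance model, a three-correlation-scale cutoff, and SciPy's CSC format; the per-iteration diagonal update and the 0.95 shrink factor follow the description above, the rest is illustrative.

```python
# Sketch: truncated covariance in CSC format with per-iteration updates.
import numpy as np
from scipy.sparse import csc_matrix

n, dx, corr_len, sigma2 = 500, 1.0, 10.0, 1.0
cutoff = 3 * corr_len                        # store ~3 correlation scales

rows, cols, vals = [], [], []
for i in range(n):
    jmax = min(n, i + int(cutoff / dx) + 1)
    for j in range(i, jmax):
        v = sigma2 * np.exp(-(j - i) * dx / corr_len)
        if j == i:
            rows.append(i); cols.append(i); vals.append(v)
        else:                                # mirror the symmetric entry
            rows += [i, j]; cols += [j, i]; vals += [v, v]
C = csc_matrix((vals, (rows, cols)), shape=(n, n))

# per-iteration update: new diagonal (posterior variances), shorter scale
C.setdiag(0.9 * C.diagonal())
corr_len *= 0.95
print(C.nnz, "stored entries vs", n * n, "in the full matrix")
```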
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
NASA Astrophysics Data System (ADS)
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during the different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
Joint Inversion of 3D MT/Gravity/Magnetic Data at the Pisagua Fault.
NASA Astrophysics Data System (ADS)
Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.
2017-12-01
This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey has poor coverage of the study area, with only 21 stations; however, it allows detection of a low-resistivity zone aligned with the Pisagua Fault trace, which is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and coverage, adds more detail and resolution to the detected low-resistivity structure and helps to improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion process minimizes a multiple-target function which includes the data misfit, model roughness and coupling norms (cross-gradient and direct relations) for all geophysical methods considered (MT, gravity and magnetic). This process is solved iteratively using the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit, model roughness and the coupling with the other geophysical models. To solve the model updates for the magnetic and gravity methods, dedicated 3D inversion software codes were developed which include the coupling norms with the additional geophysical parameters. The model update of the 3D MT is calculated using an iterative method which sequentially filters the prior model and the output model of a single 3D MT inversion process to obtain a resistivity model coupled with the gravity and magnetic solutions.
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.
2018-04-01
In this paper, a finite element (FE) joint modelling technique for prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. An I cross-section dissimilar flat plate joining two series of aluminium alloy, AA7075 and AA6061, by TIG welding is used. In order to find the most optimum representation of the TIG-welded dissimilar plate, finite element models with three types of joint modelling were employed in this study: bar element (CBAR), beam element and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the TIG joints due to its accurate prediction of mode shapes and because, unlike the other weld models, it contains an updating parameter for weld modelling. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After performing model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.
Overview and Evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In late 2016 or early 2017, CMAQ version 5.2 will be released. This new version of CMAQ will contain important updates to the current CMAQv5.1 modeling system, along with several instrumented versions of the model (e.g. decoupled direct method and sulfur tracking). Some specific model updates include the implementation of a new wind-blown dust treatment in CMAQv5.2, a significant improvement over the treatment in v5.1, which can severely overestimate wind-blown dust under certain conditions. Several other major updates to the modeling system include an update to the calculation of aerosols; implementation of full halogen chemistry (CMAQv5.1 contains a partial implementation of halogen chemistry); the new carbon bond 6 (CB6) chemical mechanism; updates to the cloud model in CMAQ; and a new lightning assimilation scheme for the WRF model which significantly improves the placement and timing of convective precipitation in the WRF precipitation fields. Numerous other updates to the modeling system will also be available in v5.2.
Dynamic updating atlas for heart segmentation with a nonlinear field-based model.
Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng
2017-09-01
Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a nonlinear deformation field scheme. The proposed method is based on the features among dual-source CT (DSCT) slices. The extraction of these features forms a base to construct an average model, and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model and dynamic atlas updating strategies, can provide an effective and accurate way to perform whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are, unavoidably, corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
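For reference, a minimal sketch of the classical H1 and H2 estimators computed from measured excitation and response time histories (conjugation conventions vary across texts; this follows scipy.signal's, and all parameter values are illustrative):

```python
import numpy as np
from scipy.signal import csd, welch

def h1_h2_frf(x, y, fs, nperseg=1024):
    """Classical H1 and H2 FRF estimators from excitation x and response y.
    H1 assumes noise on the output only; H2 assumes noise on the input only."""
    f, Sxx = welch(x, fs=fs, nperseg=nperseg)    # input auto-spectrum
    _, Syy = welch(y, fs=fs, nperseg=nperseg)    # output auto-spectrum
    _, Sxy = csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectrum
    H1 = Sxy / Sxx
    H2 = Syy / np.conj(Sxy)                      # Syx = conj(Sxy)
    coherence = np.abs(Sxy) ** 2 / (Sxx * Syy)   # equals H1 / H2
    return f, H1, H2, coherence
```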
Model updating in flexible-link multibody systems
NASA Astrophysics Data System (ADS)
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, to describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. Taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Several peculiarities of FLMS models must be carefully tackled. On the one hand, nonlinear models of a FLMS should be linearized about static equilibrium configurations. On the other hand, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid-link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating has finally been cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
An improved design method of a tuned mass damper for an in-service footbridge
NASA Astrophysics Data System (ADS)
Shi, Weixing; Wang, Liangkun; Lu, Zheng
2018-03-01
The tuned mass damper (TMD) has a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. First, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field test; the original FEM is then updated. The TMD is designed according to the updated FEM and optimized through simulations of its vibration control effect. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges, and the paper illustrates it with an engineering example. The frequency identification results of the field test and the original FEM show a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD effectively controls human-induced vibrations.
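The abstract's improvement lies in updating the FEM before tuning; the tuning step itself is commonly carried out with the classic Den Hartog formulas, sketched here under that assumption (the modal mass, frequency, and mass ratio below are illustrative, not the paper's):

```python
import numpy as np

def den_hartog_tmd(modal_mass, f_n_hz, mass_ratio=0.01):
    """Classic Den Hartog tuning for a TMD targeting one footbridge mode;
    in the paper's workflow, modal_mass and f_n_hz would come from the
    updated FEM rather than the original one."""
    mu = mass_ratio
    f_tmd = f_n_hz / (1.0 + mu)                          # optimal tuning frequency
    zeta = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    m = mu * modal_mass
    w = 2.0 * np.pi * f_tmd
    return {"mass": m, "stiffness": m * w ** 2, "damping": 2.0 * zeta * m * w}

# e.g. a 2 Hz mode with a 40 t modal mass and a 2% mass ratio
print(den_hartog_tmd(modal_mass=40e3, f_n_hz=2.0, mass_ratio=0.02))
```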
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
This paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight representative tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter. Moreover, its overall performance is better than that of the other six tracking methods.
Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C
2014-01-01
Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285
Shiraishi, Emi; Maeda, Kazuhiro; Kurata, Hiroyuki
2009-02-01
Numerical simulation of differential equation systems plays a major role in understanding how metabolic network models generate particular cellular functions. On the other hand, classical technical problems with stiff differential equations remain to be solved, even though many elegant algorithms have been presented. To relax the stiffness problem, we propose new practical methods: the gradual update of differential-algebraic equations based on gradual application of the steady-state approximation to stiff differential equations, and the gradual update of the initial values in differential-algebraic equations. These empirical methods are highly efficient for simulating the steady-state solutions of stiff differential equations that existing solvers alone cannot solve. They are effective in extending the applicability of dynamic simulation to biochemical network models.
Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Li, L.; Cao, Z.; Zhou, H.
2017-12-01
Groundwater modeling calls for an effective and robust integration method to fill the gap between the model and the data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed to update models using all the data together, and therefore has a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed that continuously updates the models by assimilating the measurements together. In this work, we compare the performance of the EnKF, the ES and the iterative ES using a synthetic example in groundwater modeling. Hydraulic head data modeled on the basis of the reference conductivity field are used to inversely estimate conductivities at unsampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost, and (2) the iterative ES performs better than the ES because of its continuous updating. These findings suggest that the iterative ES deserves much more attention for data assimilation in groundwater modeling.
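A minimal numpy sketch of the analysis step shared by the methods compared above, in its perturbed-observation EnKF form; the ES applies the same update once with all data stacked, and the iterative ES repeats it over a few iterations. Shapes and the linear observation operator are illustrative assumptions:

```python
import numpy as np

def enkf_analysis(ens, H, y, R, rng):
    """One perturbed-observation EnKF analysis step.
    ens: (n_state, n_ens) prior ensemble; H: (n_obs, n_state) linear
    observation operator; y: (n_obs,) data; R: obs-error covariance."""
    n_ens = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                         # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # perturb observations so the analysis ensemble keeps correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return ens + K @ (Y - H @ ens)
```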
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Automated Simulation Updates based on Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Ward, David G.
2007-01-01
A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of the Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
Build-Up Approach to Updating the Mock Quiet Spike Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flightworthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEMs) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was trial and error. This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (as defined by the analyst's updating requirements), then the updating process is completed. If the match does not meet the updating requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits. It performs a modal analysis on this model and modifies it to create an updated model that has mode shapes and frequencies similar to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criterion (MAC) values that allow for the quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform. NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
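The MAC values the code reports quantify the correlation between analytical and measured mode shapes; a minimal sketch of that standard comparison (function and variable names are illustrative):

```python
import numpy as np

def mac(phi_analytical, phi_test):
    """Modal Assurance Criterion between one analytical and one GVT mode
    shape; values near 1 indicate well-matched shapes."""
    num = np.abs(np.vdot(phi_analytical, phi_test)) ** 2
    den = (np.vdot(phi_analytical, phi_analytical).real
           * np.vdot(phi_test, phi_test).real)
    return num / den
```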
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However, in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this situation an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
NASA Astrophysics Data System (ADS)
Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.
2018-06-01
Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.
2017-09-01
Resistance spot welding (RSW) is a proficient joining method commonly used for sheet metal joining and has become one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW involves the application of heat, pressure, and time when joining two or more metal sheets at a localized area, and is claimed to be the most efficient welding process in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure joining mild steel 1010 and stainless steel 304. In order to do the updating, a normal-mode finite element analysis (FEA) and an experimental modal analysis (EMA) have been carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10%. A sensitivity analysis is performed to determine which parameters are influential in this structural dynamic modification. The Young's modulus and density of both materials are identified as significant parameters for model updating. In conclusion, after performing model updating, the total average error of the dissimilar RSW plate model is significantly improved.
Walking Distance Estimation Using Walking Canes with Inertial Sensors
Suh, Young Soo
2018-01-01
A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971
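A minimal sketch of the zero-velocity measurement update mentioned above, applied to the 3-D velocity states of an indirect Kalman filter during the cane's stationary phase (H = I for a direct velocity pseudo-measurement; the noise level is an illustrative assumption):

```python
import numpy as np

def zero_velocity_update(v, P, sigma_v=0.01):
    """Fuse the pseudo-measurement v = 0 into the velocity estimate when
    the quadripod cane is detected to be stationary.
    v: (3,) velocity estimate; P: (3, 3) velocity error covariance."""
    R = (sigma_v ** 2) * np.eye(3)       # pseudo-measurement noise
    K = P @ np.linalg.inv(P + R)         # Kalman gain with H = I
    v_corrected = v + K @ (np.zeros(3) - v)
    P_corrected = (np.eye(3) - K) @ P
    return v_corrected, P_corrected
```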
Poppenga, Sandra K.; Gesch, Dean B.; Worstell, Bruce B.
2013-01-01
The 1:24,000-scale high-resolution National Hydrography Dataset (NHD) mapped hydrography flow lines require regular updating because land surface conditions that affect surface channel drainage change over time. Historically, NHD flow lines were created by digitizing surface water information from aerial photography and paper maps. Using these same methods to update nationwide NHD flow lines is costly and inefficient; furthermore, these methods result in hydrography that lacks the horizontal and vertical accuracy needed for fully integrated datasets useful for mapping and scientific investigations. Effective methods for improving mapped hydrography employ change detection analysis of surface channels derived from light detection and ranging (LiDAR) digital elevation models (DEMs) and NHD flow lines. In this article, we describe the usefulness of surface channels derived from LiDAR DEMs for hydrography change detection to derive spatially accurate and time-relevant mapped hydrography. The methods employ analyses of horizontal and vertical differences between LiDAR-derived surface channels and NHD flow lines to define candidate locations of hydrography change. These methods alleviate the need to analyze and update the nationwide NHD for time relevant hydrography, and provide an avenue for updating the dataset where change has occurred.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
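A minimal sketch of the rank-one secant (Broyden-style) update for a scalar limit-state gradient, which is the kind of replacement for finite differences described above; variable names and the surrounding FORM loop are illustrative:

```python
import numpy as np

def broyden_gradient_update(grad, dx, dg):
    """Rank-one secant update of the estimated gradient of a scalar
    limit-state function g: the updated gradient satisfies
    grad_new @ dx == dg, the observed change in g, so no extra
    black-box evaluations are needed."""
    return grad + ((dg - grad @ dx) / (dx @ dx)) * dx

# usage inside a FORM iteration (illustrative names):
# dx = x_new - x_old
# dg = g(x_new) - g(x_old)
# grad = broyden_gradient_update(grad, dx, dg)
```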
Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning
NASA Astrophysics Data System (ADS)
Zuberi, M. A. H.; Pratt, R. G.
2018-04-01
The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
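The paper defines the SSP through a scaled Sobolev inner product; one common way such a preconditioner is realized, sketched here as an assumption rather than the authors' exact operator, is an implicit smoothing of the gradient in the wavenumber domain:

```python
import numpy as np

def sobolev_precondition(grad, alpha):
    """Apply a Sobolev-type preconditioner to a 2-D FWI gradient by
    solving (I - alpha * Laplacian) g_s = g in the Fourier domain.
    This damps high model wavenumbers in the update while leaving the
    objective function itself unchanged."""
    ny, nx = grad.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)[:, None]
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)[None, :]
    damp = 1.0 / (1.0 + alpha * (kx ** 2 + ky ** 2))
    return np.fft.ifft2(np.fft.fft2(grad) * damp).real
```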
The influence of learning and updating speed on the growth of commercial websites
NASA Astrophysics Data System (ADS)
Wan, Xiaoji; Deng, Guishi; Bai, Yang; Xue, Shaowei
2012-08-01
In this paper, we study the competition model of commercial websites with learning and updating speed, and further analyze the influence of learning and updating speed on the growth of commercial websites from a nonlinear dynamics perspective. Using the center manifold theory and the normal form method, we give the explicit formulas determining the stability and periodic fluctuation of commercial sites. Numerical simulations reveal that sites periodically fluctuate as the speed of learning and updating crosses one threshold. The study provides reference and evidence for website operators to make decisions.
Fuel consumption modeling in support of ATM environmental decision-making
DOT National Transportation Integrated Search
2009-07-01
The FAA has recently updated the airport terminal area fuel consumption methods used in its environmental models. These methods are based on fitting manufacturers' fuel consumption data to empirical equations. The new fuel consumption metho...
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well-recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low-quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases the accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
WILDLAND FIRE EMISSION MODELING FOR CMAQ: AN UPDATE
This paper summarizes recent efforts to improve the methods used for modeling wild land fire emissions both for retrospective modeling and real-time forecasting. These improvements focus on the temporal and spatial resolution of the activity data as well as the methods to estimat...
Kunz, Matthew Ross; Ottaway, Joshua; Kalivas, John H; Georgiou, Constantinos A; Mousdis, George A
2011-02-23
Detecting and quantifying extra virgin olive oil adulteration is of great importance to the olive oil industry. Many spectroscopic methods in conjunction with multivariate analysis have been used to address these issues. However, successes to date are limited, as calibration models are built for a specific set of geographical regions, growing seasons, cultivars, and oil extraction methods (the composite primary condition). Samples from new geographical regions, growing seasons, etc. (secondary conditions) are not always correctly predicted by the primary model, because olive oil and/or adulterant compositions stemming from the secondary conditions do not match the primary conditions. Three Tikhonov regularization (TR) variants are used in this paper to allow adulterant (sunflower oil) concentration predictions in samples from geographical regions not part of the original primary calibration domain. Of the three TR variants, ridge regression with an additional 2-norm penalty provides the smallest validation sample prediction errors. Although the paper reports on using TR for model updating to predict adulterant oil concentration, the methods should also be applicable to updating models that distinguish adulterated samples from pure extra virgin olive oil. Additionally, the approaches are general and can be used with other spectroscopic methods and adulterants as well as with other agricultural products.
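A hedged sketch of the general TR model-updating idea the abstract describes: the primary calibration is augmented with weighted secondary-condition samples plus an extra 2-norm (ridge) penalty on the regression vector. The exact weighting scheme of the paper's best variant may differ; all names below are illustrative:

```python
import numpy as np

def tikhonov_update(Xp, yp, Xs, ys, lam, tau):
    """Solve min ||Xp b - yp||^2 + lam^2 ||Xs b - ys||^2 + tau^2 ||b||^2
    by stacking the primary spectra (Xp, yp), the weighted secondary
    samples (Xs, ys), and a ridge block, then taking least squares."""
    p = Xp.shape[1]
    A = np.vstack([Xp, lam * Xs, tau * np.eye(p)])
    rhs = np.concatenate([yp, lam * ys, np.zeros(p)])
    b, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return b   # regression vector for predicting adulterant content
```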
Modified methods for growing 3-D skin equivalents: an update.
Lamb, Rebecca; Ambler, Carrie A
2014-01-01
Artificial epidermis can be reconstituted in vitro by seeding primary epidermal cells (keratinocytes) onto a supportive substrate and then growing the developing skin equivalent at the air-liquid interface. In vitro skin models are widely used to study skin biology and for industrial drug and cosmetic testing. Here, we describe updated methods for growing 3-dimensional skin equivalents using de-vitalized, de-epidermalized dermis (DED) substrates including methods for DED substrate preparation, cell seeding, growth conditions, and fixation procedures.
Online Updating of Statistical Inference in the Big Data Setting.
Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui
2016-01-01
We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
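A minimal sketch of the online-updating idea for linear models described above: sufficient statistics are accumulated block by block so coefficients can be refreshed as data arrive without storing the history; the pseudoinverse step tolerates rank deficiency from rare-event covariates. Names and the simulated stream are illustrative:

```python
import numpy as np

class OnlineLinearModel:
    """Accumulate X'X and X'y over incoming data blocks so the estimate
    updates without storage of, or access to, the historical data."""
    def __init__(self, n_features):
        self.XtX = np.zeros((n_features, n_features))
        self.Xty = np.zeros(n_features)

    def update(self, X_block, y_block):
        self.XtX += X_block.T @ X_block
        self.Xty += X_block.T @ y_block

    def coefficients(self):
        # pinv tolerates rank-deficient subset design matrices
        return np.linalg.pinv(self.XtX) @ self.Xty

# stream two blocks, refreshing the estimate after each arrival
rng = np.random.default_rng(0)
model = OnlineLinearModel(3)
for _ in range(2):
    X = rng.normal(size=(100, 3))
    model.update(X, X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100))
print(model.coefficients())
```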
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than the ones obtained with models previously published. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduce bias and length of credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment if data are collected on the treated neonate.
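A minimal sketch of one sequential particle-updating step of the kind the abstract describes, assuming a user-supplied log-likelihood for a newly collected caffeine concentration measurement; this illustrates the reweight-and-resample pattern, not the authors' exact algorithm:

```python
import numpy as np

def sequential_particle_update(theta, log_w, log_lik, y_new, rng):
    """Reweight parameter particles by the likelihood of the new
    measurement, then resample to limit weight degeneracy.
    theta: (n_particles, n_params); log_lik(theta_i, y) is model-specific."""
    log_w = log_w + np.array([log_lik(t, y_new) for t in theta])
    w = np.exp(log_w - log_w.max())      # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(len(theta), size=len(theta), p=w)
    return theta[idx], np.zeros(len(theta))   # uniform log-weights after resampling
```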
NASA Astrophysics Data System (ADS)
Pignalberi, A.; Pezzopane, M.; Rizzi, R.; Galkin, I.
2018-01-01
The first part of this paper reviews methods using effective solar indices to update a background ionospheric model, focusing on those employing the Kriging method to perform the spatial interpolation. It then proposes a method to update the International Reference Ionosphere (IRI) model through the assimilation of data collected by a European ionosonde network. The method, called International Reference Ionosphere UPdate (IRI UP), which can potentially operate in real time, is mathematically described and validated for the period 9-25 March 2015 (a time window including the well-known St. Patrick's Day storm that occurred on 17 March), using the IRI and IRI Real Time Assimilative Model (IRTAM) models as the reference. It relies on the foF2 and M(3000)F2 ionospheric characteristics, recorded routinely by a network of 12 European ionosonde stations, which are used to calculate for each station effective values of the IRI indices IG12 and R12 (identified as IG12eff and R12eff); then, starting from this discrete dataset of values, two-dimensional (2D) maps of IG12eff and R12eff are generated through the universal Kriging method. Five variogram models are proposed and tested statistically to select the best performer for each effective index. The computed maps of IG12eff and R12eff are then used in the IRI model to synthesize updated values of foF2 and hmF2. To evaluate the ability of the proposed method to reproduce rapid local changes that are common under disturbed conditions, quality metrics are calculated for two test stations whose measurements were not assimilated in IRI UP, Fairford (51.7°N, 1.5°W) and San Vito (40.6°N, 17.8°E), for the IRI, IRI UP, and IRTAM models. The proposed method turns out to be very effective under highly disturbed conditions, with significant improvements in the foF2 representation and noticeable improvements in hmF2. Important improvements have been verified also for quiet and moderately disturbed conditions. A visual analysis of foF2 and hmF2 maps highlights the ability of the IRI UP method to catch small-scale changes occurring under disturbed conditions which are not seen by IRI.
Modeling and dynamic environment analysis technology for spacecraft
NASA Astrophysics Data System (ADS)
Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei
Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotone Integrated Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure on the fairing. The exact flow structures of the fairing wall surface under different Mach numbers are obtained, and a spacecraft model is then constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, defining test conditions and designing optimal structures.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.
2015-09-08
A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the multi-directed graph based on the resources, identifying data feeds in accordance with the scenario and updating the multi-directed graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the multi-directed graph using the analytical routines and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the multi-directed graph.
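A minimal sketch of the patent's central data structure, using networkx's MultiDiGraph to hold labeled relations between resources, with a feed-driven update and a query producing a data output; the scenario, node names, and relation labels below are hypothetical:

```python
import networkx as nx

# Scenario graph: nodes are resources, edges carry relation labels.
g = nx.MultiDiGraph(scenario="pathogen-surveillance")   # hypothetical topic
g.add_edge("sample_17", "strain_A", relation="classified_as")
g.add_edge("strain_A", "region_3", relation="observed_in")

def ingest_feed(graph, records):
    """Update the graph from a data feed of (source, target, relation)
    tuples, mirroring the feed-driven update step."""
    for src, dst, rel in records:
        graph.add_edge(src, dst, relation=rel)

ingest_feed(g, [("sample_18", "strain_A", "classified_as")])

# A 'data output' query: everything classified as strain_A
hits = [u for u, v, d in g.edges(data=True)
        if v == "strain_A" and d["relation"] == "classified_as"]
print(hits)   # ['sample_17', 'sample_18']
```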
Numerical simulation of intelligent compaction technology for construction quality control.
DOT National Transportation Integrated Search
2015-02-01
For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention for their precision, speed and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with a small sample so that it adapts to accuracy deterioration. First, abundant features are introduced to capture extensive information about the defects. Second, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose the method for updating the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with a small sample and fits the changed model well. Robustness, a low sample requirement, and adaptivity are demonstrated in the experiments.
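A hedged sketch of the ensemble idea the abstract describes: SVMs trained on random feature subspaces, fused by a Bayes classifier, with only the cheap fusion stage refit from a small fresh sample. Class and parameter names are illustrative, not the paper's:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

class SubspaceSVMEnsemble:
    """Random-subspace SVMs fused by a Bayes classifier; adaptation to
    drift refits only the fusion stage from a small sample."""
    def __init__(self, n_svm=10, subspace_dim=20, seed=0):
        self.n_svm, self.subspace_dim = n_svm, subspace_dim
        self.rng = np.random.default_rng(seed)

    def _meta(self, X):
        # base-SVM predictions form the input to the Bayes fuser
        return np.column_stack([clf.predict(X[:, idx])
                                for clf, idx in zip(self.svms, self.subsets)])

    def fit(self, X, y):
        self.subsets = [self.rng.choice(X.shape[1], self.subspace_dim,
                                        replace=False)
                        for _ in range(self.n_svm)]
        self.svms = [SVC().fit(X[:, idx], y) for idx in self.subsets]
        self.fuser = GaussianNB().fit(self._meta(X), y)
        return self

    def update(self, X_small, y_small):
        # adapt to accuracy deterioration with a small fresh sample
        self.fuser.fit(self._meta(X_small), y_small)

    def predict(self, X):
        return self.fuser.predict(self._meta(X))
```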
Simulation and analyses of the aeroassist flight experiment attitude update method
NASA Technical Reports Server (NTRS)
Carpenter, J. R.
1991-01-01
A method which will be used to update the alignment of the Aeroassist Flight Experiment's Inertial Measuring Unit is simulated and analyzed. This method, the Star Line Maneuver, uses measurements from the Space Shuttle Orbiter star trackers along with an extended Kalman filter to estimate a correction to the attitude quaternion maintained by an Inertial Measuring Unit in the Orbiter's payload bay. This quaternion is corrupted by on-orbit bending of the Orbiter payload bay with respect to the Orbiter navigation base, which is incorporated into the payload quaternion when it is initialized via a direct transfer of the Orbiter attitude state. The method of updating this quaternion is examined through verification of baseline cases and Monte Carlo analysis using a simplified simulation. The simulation uses nominal state dynamics and measurement models from the Kalman filter as its real-world models, and is programmed on a MicroVAX minicomputer using Matlab, an interactive matrix analysis tool. Results are presented which confirm and augment previous performance studies, thereby enhancing confidence in the Star Line Maneuver design methodology.
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a nonlinear SSM, implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of the input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images
NASA Astrophysics Data System (ADS)
Jeong, J.; Kim, T.
2016-06-01
Research on 3D urban modelling has been actively carried out for a long time. Recently the need for 3D urban modelling has increased rapidly due to improved geo-web services and the popularity of smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for modelling, but there are some limitations: immediate updates for changes to building models are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with measured values of real buildings, using ratios of model edge lengths to measured edge lengths. The results showed a 5.8% average error in the length ratio. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools or datasets.
NASA Astrophysics Data System (ADS)
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Floods are among the most hazardous disasters and cause serious damage to people and property around the world. To prevent or mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation over complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional bookkeeping is kept small by checking only cells at the wet and dry interface, and the overall computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes two to ten times faster while producing the same results as the simulation without the ADU method.
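A hedged sketch of the ADU bookkeeping: keep a set of registered cells, and grow it only from flooded cells at the wet/dry interface, so dry areas are never processed. The "solver" below is a trivial spreading rule purely for illustration; grid size and thresholds are assumptions.

```python
# Automatic Domain Updating sketch: only registered (active) cells are
# ever touched; flooded cells register their neighbours each step.
import numpy as np

ny, nx = 100, 100
depth = np.zeros((ny, nx))
depth[50, 0] = 2.0                       # inflow from the river boundary
active = {(50, 0)}                       # cells registered at the start

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < ny and 0 <= j + dj < nx:
            yield i + di, j + dj

for step in range(30):
    newly_wet = set()
    for i, j in list(active):
        if depth[i, j] > 0.01:           # flooded cell: register neighbours
            for n in neighbors(i, j):
                if n not in active:
                    newly_wet.add(n)
    active |= newly_wet
    # placeholder "solver": spread some water into registered cells only
    for i, j in newly_wet:
        depth[i, j] = 0.25 * depth[50, 0]
print(len(active), "of", ny * nx, "cells ever processed")
```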
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of the updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is posed as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function with a hybrid particle-swarm and Nelder-Mead simplex optimization method, achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
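The RS-plus-MCS step can be illustrated with a one-parameter toy problem: fit a polynomial response surface to a handful of expensive model runs, then Monte Carlo sample the cheap surrogate. The model, design points, and polynomial order are illustrative assumptions (the paper's incomplete fourth-order RSM in several parameters is reduced to one dimension here).

```python
# Fit a response surface to a few "expensive" runs, then sample it cheaply.
import numpy as np

def expensive_model(k):                   # stand-in for an FE eigen-solve
    return np.sqrt(k) / (2 * np.pi)

k_train = np.linspace(0.5, 2.0, 9)        # design-of-experiments points
f_train = expensive_model(k_train)
coeffs = np.polyfit(k_train, f_train, 4)  # 4th-order RS analogue

rng = np.random.default_rng(1)
k_samples = rng.normal(1.2, 0.1, 100_000)    # rapid random sampling
f_samples = np.polyval(coeffs, k_samples)    # surrogate, not the FE model
print(f_samples.mean(), f_samples.std())     # propagated mean / spread
```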
The Cancer Family Caregiving Experience: An Updated and Expanded Conceptual Model
Fletcher, Barbara Swore; Miaskowski, Christine; Given, Barbara; Schumacher, Karen
2011-01-01
Objective The decade from 2000 to 2010 was an era of tremendous growth in family caregiving research specific to the cancer population. This research has implications for how cancer family caregiving is conceptualized, yet the most recent comprehensive model of cancer family caregiving was published ten years ago. Our objective was to develop an updated and expanded comprehensive model of the cancer family caregiving experience, derived from the concepts and variables used in research during the past ten years. Methods A conceptual model was developed based on cancer family caregiving research published from 2000 to 2010. Results Our updated and expanded model has three main elements: 1) the stress process, 2) contextual factors, and 3) the cancer trajectory. Emerging ways of conceptualizing the relationships between and within model elements are addressed, as well as an emerging focus on caregiver-patient dyads as the unit of analysis. Conclusions Cancer family caregiving research has grown dramatically since 2000, resulting in a greatly expanded conceptual landscape. This updated and expanded model of the cancer family caregiving experience synthesizes the conceptual implications of an international body of work and demonstrates tremendous progress in how cancer family caregiving research is conceptualized. PMID:22000812
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
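A simplified stand-in for the logistic regression-based re-calibration idea: fit a logistic model to the forest's predicted probabilities (on the logit scale) using data from the new center. The terminal-node translation itself is omitted, so this illustrates only the re-calibration step, not the authors' exact procedure.

```python
# Re-calibrate random forest probabilities for a new center.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_old = rng.normal(size=(500, 5)); y_old = (X_old[:, 0] > 0).astype(int)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_old, y_old)

# new center: same signal, shifted covariate distribution
X_new = rng.normal(loc=0.5, size=(200, 5)); y_new = (X_new[:, 0] > 0).astype(int)
p = rf.predict_proba(X_new)[:, 1].clip(1e-6, 1 - 1e-6)
logit = np.log(p / (1 - p)).reshape(-1, 1)
recal = LogisticRegression().fit(logit, y_new)     # re-calibration model
p_updated = recal.predict_proba(logit)[:, 1]
print(p.mean(), p_updated.mean(), y_new.mean())    # updated probs track prevalence
```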
NASA Astrophysics Data System (ADS)
Tucker, Gregory E.; Lancaster, Stephen T.; Gasparini, Nicole M.; Bras, Rafael L.; Rybarczyk, Scott M.
2001-10-01
We describe a new set of data structures and algorithms for dynamic terrain modeling using a triangulated irregular network (TIN). The framework provides an efficient method for storing, accessing, and updating a Delaunay triangulation and its associated Voronoi diagram. The basic data structure consists of three interconnected data objects: triangles, nodes, and directed edges. Encapsulating each of these geometric elements within a data object makes it possible to essentially decouple the TIN representation from the modeling applications that make use of it. Both the triangulation and its corresponding Voronoi diagram can be rapidly retrieved or updated, making these methods well suited to adaptive remeshing schemes. We develop a set of algorithms for defining drainage networks and identifying closed depressions (e.g., lakes) for hydrologic and geomorphic modeling applications. We also outline simple numerical algorithms for solving network routing and 2D transport equations within the TIN framework. The methods are illustrated with two example applications, a landscape evolution model and a distributed rainfall-runoff model.
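A minimal sketch of the three interconnected data objects described above; the field names and cross-references are assumptions based on the description, not the authors' exact implementation.

```python
# Triangles, nodes, and directed edges that reference one another, so the
# triangulation can be traversed and updated in place.
from dataclasses import dataclass, field

@dataclass
class Node:
    x: float
    y: float
    z: float
    edge: "Edge | None" = None            # one outgoing directed edge

@dataclass
class Edge:
    origin: "Node"                        # directed: origin -> dest
    dest: "Node"
    twin: "Edge | None" = None            # opposite directed edge
    tri: "Triangle | None" = None         # triangle to the left of this edge

@dataclass
class Triangle:
    edges: list = field(default_factory=list)      # three boundary edges
    neighbors: list = field(default_factory=list)  # adjacent triangles

a, b, c = Node(0, 0, 1.0), Node(1, 0, 0.5), Node(0, 1, 2.0)
e1, e2, e3 = Edge(a, b), Edge(b, c), Edge(c, a)
t = Triangle(edges=[e1, e2, e3])
for e in t.edges:
    e.tri = t                             # edges know their triangle...
a.edge = e1                               # ...and nodes know an edge
```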
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of the nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be used directly for damage identification and further for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward and simultaneous inversion in a complex geological model including irregular topography (or an irregular reflector or velocity anomaly), we combine our previous multiphase arrival tracking method (referred to as the triangular shortest-path method, TSPM) in triangular (2D) or tetrahedral (3D) cell models with a linearized inversion solver (a damped minimum norm, constrained least squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel time inversion method that updates both velocity and reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we derive the partial derivative of the velocity variation with respect to the depth change of the reflector. Numerical simulation results show that the forward modeling can be tuned to high accuracy and that irregular velocity anomalies and reflector geometry are accurately recovered in the simultaneous inversion, because triangular/tetrahedral cells can easily conform to irregular topography or subsurface interfaces.
NASA Astrophysics Data System (ADS)
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweeps from two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was used to recover the modal properties from the measurement data. An initial FEM was developed and the correlation between the experimental modal results and their analytical counterparts was studied. The modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to update the FEM automatically. For this purpose, the influences of bearing stiffness, and of the mass density and Young's modulus of the materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a sum of squared relative errors of the natural frequencies between the FEM and the experiment. All identified modes were used as target responses in order to place more constraints on the optimization and decrease the number of feasible combinations of parameter changes. The updated FEM of the bridge produced sufficient improvements in the natural frequencies for most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
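The objective function described above (a sum of squared relative natural-frequency errors over the identified modes) can be sketched as follows; the placeholder fem_frequencies function and the parameter set are hypothetical stand-ins for the actual FE modal solve and the subproblem approximation optimizer.

```python
# Minimize squared relative frequency errors over uncertain parameters.
import numpy as np
from scipy.optimize import minimize

f_exp = np.array([2.1, 3.4, 5.9, 8.2])          # identified frequencies, Hz

def fem_frequencies(theta):
    """Hypothetical placeholder for the FE modal solve at parameters theta."""
    k_bearing, rho, E = theta
    return np.array([2.0, 3.5, 6.0, 8.0]) * np.sqrt(E / rho) * (1 + 0.01 * k_bearing)

def objective(theta):
    f_fem = fem_frequencies(theta)
    return np.sum(((f_fem - f_exp) / f_exp) ** 2)

res = minimize(objective, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)
```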
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
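A hedged, minimal sketch of the SLMC recipe on a 1D Ising-like chain: an effective nearest-neighbour model is learned by linear regression from trial local-update simulations of a "true" energy that also has a next-nearest-neighbour term; proposals generated by evolving the cheap effective model are then accepted with a correction for the true/effective energy mismatch. The chain length, couplings, and regression features are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 32, 0.4
E_true = lambda s: -np.sum(s * np.roll(s, 1)) - 0.2 * np.sum(s * np.roll(s, 2))
feat = lambda s: np.sum(s * np.roll(s, 1))        # NN-correlation feature

def local_sweep(s, energy):
    """One sweep of single-spin Metropolis updates under a given energy."""
    for _ in range(L):
        i = rng.integers(L); s2 = s.copy(); s2[i] *= -1
        if rng.random() < np.exp(-beta * (energy(s2) - energy(s))):
            s = s2
    return s

# 1) trial simulation with local updates to gather training data
s = rng.choice([-1, 1], L).astype(float)
X, y = [], []
for _ in range(200):
    s = local_sweep(s, E_true)
    X.append([feat(s), 1.0]); y.append(E_true(s))
J, c = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
E_eff = lambda s: J * feat(s) + c                 # learned effective model

# 2) actual simulation: propose by evolving the effective model, then
#    accept with the SLMC ratio correcting for the model mismatch
for _ in range(100):
    prop = local_sweep(s.copy(), E_eff)           # cheap long proposal
    dW = (E_true(prop) - E_true(s)) - (E_eff(prop) - E_eff(s))
    if rng.random() < np.exp(-beta * dW):
        s = prop
print("learned effective NN coupling:", J)
```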
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems, so the weights of the network can be solved easily by a linear least-squares method. This formulation makes it easy to update the weights instantly when either a new pattern or a new enhancement node is added. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time series, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared with existing models that require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
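The instant update for a newly added pattern can be written as a rank-one (Sherman-Morrison) refresh of the least-squares solution; the sizes and regularisation below are illustrative assumptions, and the batch comparison at the end checks the algebra.

```python
# Recursive least-squares update when one new training pattern arrives.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 10))             # enhancement-node outputs
y = rng.normal(size=50)

P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(10))   # regularised inverse
w = P @ H.T @ y                            # initial least-squares weights

def add_pattern(P, w, h, t):
    """Rank-one update for a new (pattern h, target t) pair."""
    Ph = P @ h
    gain = Ph / (1.0 + h @ Ph)
    P_new = P - np.outer(gain, Ph)
    w_new = w + gain * (t - h @ w)
    return P_new, w_new

h_new, t_new = rng.normal(size=10), 0.3
P, w = add_pattern(P, w, h_new, t_new)
# check against the batch solution including the new pattern
H2, y2 = np.vstack([H, h_new]), np.append(y, t_new)
w_batch = np.linalg.solve(H2.T @ H2 + 1e-6 * np.eye(10), H2.T @ y2)
print(np.max(np.abs(w - w_batch)))         # tiny: same weights
```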
Recommendations for kidney disease guideline updating: a report by the KDIGO Methods Committee
Uhlig, Katrin; Berns, Jeffrey S.; Carville, Serena; Chan, Wiley; Cheung, Michael; Guyatt, Gordon H.; Hart, Allyson; Lewis, Sandra Zelman; Tonelli, Marcello; Webster, Angela C.; Wilt, Timothy J.; Kasiske, Bertram L.
2017-01-01
Updating rather than de novo guideline development now accounts for the majority of guideline activities for many guideline development organizations, including Kidney Disease: Improving Global Outcomes (KDIGO), an international kidney disease guideline development entity that has produced guidelines on kidney diseases since 2008. Increasingly, guideline developers are moving away from updating at fixed intervals in favor of more flexible approaches that use periodic expert assessment of guideline currency (with or without an updated systematic review) to determine the need for updating. Determining the need for guideline updating in an efficient, transparent, and timely manner is challenging, and updating of systematic reviews and guidelines is labor intensive. Ideally, guidelines should be updated dynamically when new evidence indicates a need for a substantive change in the guideline based on a priori criteria. This dynamic updating (sometimes referred to as a living guideline model) can be facilitated with the use of integrated electronic platforms that allow updating of specific recommendations. This report summarizes consensus-based recommendations from a panel of guideline methodology professionals on how to keep KDIGO guidelines up to date. PMID:26994574
3D Reconstruction of human bones based on dictionary learning.
Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin
2017-11-01
An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix encodes the connectivity information. For better reconstruction performance, we propose a balance coefficient between the approximation and regularisation terms and a method for its optimisation. Moreover, we apply a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges, yielding a reconstructed mesh with high accuracy and regularity. The experimental results show that the proposed method has the potential to produce high-precision, high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
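The CRLB step described above can be sketched compactly: for Gaussian measurement noise of variance sigma^2, the Fisher information follows from the response sensitivities, and its inverse bounds the parameter covariance. The sensitivities below are synthetic stand-ins for DDM output.

```python
# Fisher information and Cramer-Rao lower bound from response sensitivities:
# FIM = (1/sigma^2) * sum over time steps of S_t^T S_t, CRLB = inv(FIM).
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_out, n_par = 1000, 4, 3
sigma = 0.05
# stand-in for DDM-computed sensitivities of 4 outputs to 3 parameters
S = rng.normal(size=(n_steps, n_out, n_par))

FIM = np.einsum("tij,tik->jk", S, S) / sigma**2
CRLB = np.linalg.inv(FIM)                  # lower bound on covariance
print(np.sqrt(np.diag(CRLB)))              # best-case parameter std
```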
NASA Astrophysics Data System (ADS)
Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath
2018-07-01
One of the most important yet difficult aspects of the finite element model updating problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm. The results of the numerical experiments support the validity of the proposed method.
Nowakowska, Marzena
2017-04-01
The development of Bayesian logistic regression models classifying road accident severity is discussed. The previously exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated for the case when no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The resulting Bayesian logistic models are assessed on the basis of the deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. Verification of model accuracy is based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have better classification quality than those obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method-of-moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Regan, R. Steve; LaFontaine, Jacob H.
2017-10-05
This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao
2018-03-01
Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly can be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters separately with ES. However, this strategy may not be efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles around each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES that assimilates the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. Overall, the ILUES algorithm is shown to quantify the parametric uncertainties of complex hydrologic models well, whether or not a multimodal distribution exists.
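For reference, a minimal ensemble smoother update in the standard Kalman form, which is the building block ILUES applies to local ensembles around each sample; here a single global update with a toy forward model is shown, and the model, ensemble size, and noise level are illustrative assumptions.

```python
# ES update: X_a = X_f + C_xd (C_dd + R)^-1 (d_obs + noise - d_f)
import numpy as np

rng = np.random.default_rng(4)
Ne, n_par, n_obs = 200, 2, 5
X = rng.normal(size=(n_par, Ne))                  # prior ensemble

def forward(x):                                   # stand-in hydrologic model
    return np.array([np.sin(x[0]) + x[1] * t for t in range(n_obs)])

D = np.column_stack([forward(X[:, i]) for i in range(Ne)])
d_obs = forward(np.array([0.8, -0.3]))            # synthetic truth
R = 0.01 * np.eye(n_obs)

Xa = X - X.mean(1, keepdims=True)                 # anomalies
Da = D - D.mean(1, keepdims=True)
C_xd = Xa @ Da.T / (Ne - 1)
C_dd = Da @ Da.T / (Ne - 1)
noise = rng.multivariate_normal(np.zeros(n_obs), R, Ne).T
X_post = X + C_xd @ np.linalg.solve(C_dd + R, d_obs[:, None] + noise - D)
print(X_post.mean(1))   # posterior mean pulled toward the synthetic truth
```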
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
Towards a more transparent HTA process in Poland: new Polish HTA methodological guidelines
Lach, Krzysztof; Dziwisz, Michal; Rémuzat, Cécile; Toumi, Mondher
2017-01-01
Introduction: Health technology assessment (HTA) in Poland supports reimbursement decisions via the Polish HTA Agency (AOTMiT), whose guidelines were updated in 2016. Methods: We identified key changes introduced by the update and, before guideline publication, analysed discrepancies between AOTMiT assessments and the submitting marketing authorisation holders (MAHs) to elucidate the context of the update. We compared the clarity and detail of the new guidelines versus those of the UK’s National Institute for Health and Care Excellence (NICE). Results: The update specified more precise requirements for items such as indirect comparison or input data for economic modelling. Agency–MAH discrepancies relating to the subjects of the HTA update were found in 14.6% of published documents. The new Polish HTA guidelines were as clear and detailed as NICE’s on topics such as assessing quality of evidence and economic modelling, but were less informative when describing (for example) pairwise meta-analysis. Conclusions: The Polish HTA guidelines update demonstrates lessons learned from internal and external experiences. The new guidelines adhere more closely to UK HTA standards, being clearer and more informative. While the update is expected to reduce Agency–MAH discrepancies, there remain areas for development, such as providing templates to aid HTA submissions. PMID:28804603
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for the production of oil/gas reservoirs and the exploitation of geothermal energy. Flow behavior in such media is mainly dictated by the distribution of fractures, and measuring or inferring that distribution is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behavior. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow predictions. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as the strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering the desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized as a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (through updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating. Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies of increasing complexity is set up to examine the performance of the proposed approach.
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
Frequency response function (FRF) based updating of a laser spot welded structure
NASA Astrophysics Data System (ADS)
Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.
2018-04-01
The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure to a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRFs. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the numerical capability of NASTRAN Sol 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. A vast improvement in the finite element FRF was achieved using FRF-based updating with the two different objective functions proposed.
Update on ɛK with lattice QCD inputs
NASA Astrophysics Data System (ADS)
Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon
2018-03-01
We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂_K, |V_cb|, ξ_0, ξ_2, |V_us| and m_c(m_c). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |V_cb|. Our results show that the evaluation of ɛK with exclusive |V_cb| (lattice QCD inputs) has a 4.0σ tension with the experimental value, while that with inclusive |V_cb| (heavy quark expansion based on the OPE and QCD sum rules) shows no tension.
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to account accurately for errors in the computed model and for the observed range of aircraft dynamics caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes flutter margins that are worst-case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that can be accepted with more confidence than with traditional methods, which do not update the analysis with flight data. Mu is introduced as a flutter margin parameter that presents several advantages over tracking damping trends as a measure of the tendency toward instability.
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2016-12-01
Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method of McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended with three different methods: (1) the Sequential Gaussian Simulation (SGSIM) method, which uses spatial correlation to generate fractures and define their properties for the FCM; (2) the ELLIPSIM method, which randomly generates a specified number of ellipses with properties defined by probability distributions, each ellipse representing a single fracture; and (3) direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using ELLIPSIM and direct DFN conversion. The simulations used a 1 km x 1 km x 1 km model domain and a structured grid with blocks of 10 m x 10 m x 10 m, for a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN code (Hammond et al., 2014) was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rock, and a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains. SAND2016-7509 A
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometers or more. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources, and their integration into a high-accuracy, quality-checked product, was required; this in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, covering map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different parts of the country.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As some of the most important geospatial objects and military establishments, airports are always key targets in transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we develop a prototype system, and the results show that our method performs well.
A response surface methodology based damage identification technique
NASA Astrophysics Data System (ADS)
Fang, S. E.; Perera, R.
2009-06-01
Response surface methodology (RSM) is a combination of statistical and mathematical techniques that represents the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in applications such as design optimization, response prediction and model validation, but so far the literature on its application to structural damage identification (SDI) is scarce. This study therefore presents a systematic SDI procedure comprising four sequential steps: feature selection; parameter screening; primary response surface (RS) modeling and updating; and reference-state RS modeling with SDI realization, using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute for the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures with single and multiple damage scenarios. The screening capacity of the FD provides quantitative estimates of the significance levels of the updating parameters, while the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS
A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...
Forecasting daily streamflow using online sequential extreme learning machines
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need for frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single hidden layer feedforward neural networks, the online sequential extreme learning machine (OSELM), is updated automatically and inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. The predictors were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (the number of hidden nodes) for the model, yielding better predictions. With online sequential multiple linear regression (OSMLR) as a benchmark, we conclude that OSELM is an attractive approach, as it easily outperformed OSMLR in forecast accuracy.
Objective analysis of observational data from the FGGE observing systems
NASA Technical Reports Server (NTRS)
Baker, W.; Edelmann, D.; Iredell, M.; Han, D.; Jakkempudi, S.
1981-01-01
An objective analysis procedure for updating the GLAS second- and fourth-order general atmospheric circulation models using observational data from the First GARP Global Experiment is described. The objective analysis procedure is based on a successive corrections method, and the model is updated in a data assimilation cycle. Preparation of the observational data for analysis and the objective analysis scheme are described. The organization of the program and a description of the required data sets are presented. The program logic and detailed descriptions of each subroutine are given.
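A generic successive-corrections step can be sketched as follows; the Cressman-style weights and shrinking influence radii are common choices and are assumptions here, not necessarily the GLAS implementation.

```python
# Successive corrections: repeatedly add distance-weighted observation
# increments to a first-guess field, shrinking the influence radius.
import numpy as np

rng = np.random.default_rng(5)
grid_x = np.linspace(0, 10, 51)
background = np.zeros_like(grid_x)                # first-guess field
obs_x = rng.uniform(0, 10, 20)
obs = np.sin(obs_x)                               # observations

analysis = background.copy()
for radius in (4.0, 2.0, 1.0):                    # successive passes
    # increment = obs minus analysis interpolated to obs locations
    incr = obs - np.interp(obs_x, grid_x, analysis)
    d2 = (grid_x[:, None] - obs_x[None, :]) ** 2
    w = np.clip((radius**2 - d2) / (radius**2 + d2), 0.0, None)
    analysis += (w @ incr) / (w.sum(axis=1) + 1e-12)
print(np.abs(analysis - np.sin(grid_x)).mean())   # analysis error
```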
NASA Astrophysics Data System (ADS)
Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju
2018-02-01
Pulsar navigation is a promising navigation method for high-altitude orbit space missions and deep space exploration. At present, an important factor restricting the development of pulsar navigation is that its accuracy is limited by the slow update rate of the measurements. To improve the accuracy of pulsar navigation, an asynchronous observation model that increases the update rate of the measurements is proposed on the basis of a satellite constellation, which has broad prospects for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves the positioning accuracy by 31.48% and the velocity accuracy by 24.75% relative to the synchronous observation model. With the new Doppler effect compensation method proposed in this paper for the asynchronous observation model, the positioning accuracy is improved by 32.27% and the velocity accuracy by 34.07% relative to the traditional method. The simulations also show that neglecting the clock error results in filter divergence.
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived and then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating of the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
NASA Astrophysics Data System (ADS)
Martins, J. M. P.; Thuillier, S.; Andrade-Campos, A.
2018-05-01
The identification of material parameters for a given constitutive model can be seen as the first step before any practical application. In recent years, the field of material parameter identification received an important boost with the development of full-field measurement techniques, such as Digital Image Correlation. These techniques enable the use of heterogeneous displacement/strain fields, which contain more information than classical homogeneous tests. Consequently, different techniques have been developed to extract material parameters from full-field measurements. In this study, two of these techniques are addressed: the Finite Element Model Updating (FEMU) and the Virtual Fields Method (VFM). The main idea behind FEMU is to update the parameters of a constitutive model implemented in a finite element model until the numerical and experimental results match, whereas the VFM makes use of the Principle of Virtual Work and does not require any finite element simulation. Though both techniques have proved their feasibility for linear and non-linear constitutive models, it is rather difficult to rank their robustness in plasticity. The purpose of this work is to perform a comparative study for the case of elasto-plastic models. Details concerning the implementation of each strategy are presented. Moreover, a dedicated code for the VFM within a large-strain framework is developed; the reconstruction of the stress field is performed through a user subroutine. A heterogeneous tensile test is considered to compare the FEMU and VFM strategies.
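A hedged sketch of the FEMU loop described above: a forward simulation is repeated inside a least-squares iteration until simulated and measured responses match. The one-line "FE solve" (a Hollomon hardening surrogate) and the parameter names are hypothetical placeholders for a real FE call.

```python
# FEMU-style parameter identification by iterating a forward simulation
# inside a least-squares fit of simulated vs. measured strains.
import numpy as np
from scipy.optimize import least_squares

def simulate_strain_field(params, load_steps):
    """Placeholder FE run: Hollomon hardening, sigma = K * eps**n."""
    K, n = params
    return (load_steps / K) ** (1.0 / n)          # 'strain' per load step

load_steps = np.linspace(50.0, 400.0, 12)
eps_measured = simulate_strain_field([500.0, 0.25], load_steps)
eps_measured += np.random.default_rng(6).normal(0, 1e-5, eps_measured.size)

def residual(params):
    return simulate_strain_field(params, load_steps) - eps_measured

fit = least_squares(residual, x0=[400.0, 0.2], bounds=([100, 0.05], [1000, 0.6]))
print(fit.x)                                      # ~ [500, 0.25]
```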
Kennedy, Paula L; Woodbury, Allan D
2002-01-01
In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m2/sec followed a natural log normal distribution for both aquifers, with means of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data were available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, it is homogeneous in regions where little or no information is available to alter the initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
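As an illustration of the Bayesian updating idea described above, the following minimal Python sketch performs a conjugate normal update of log-transmissivity at a single location. The prior mean echoes the reported carbonate-aquifer value, but the observations and variances are invented for the example and do not come from the study.

```python
import numpy as np

# Minimal conjugate-normal Bayesian update for natural-log transmissivity
# at a single location (illustrative only; the study updates a spatial field).
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a normal prior with independent normal observations."""
    post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(obs) / obs_var)
    return post_mean, post_var

# Prior roughly matching the reported carbonate-aquifer mean ln(T) of -7.2;
# observation values are placeholders.
mean, var = bayes_update(prior_mean=-7.2, prior_var=1.0,
                         obs=np.array([-6.8, -7.5]), obs_var=0.5)
print(mean, var)  # posterior stays near the prior where data are sparse
```

With few or no observations the posterior collapses back to the prior, which mirrors the abstract's point that the field stays homogeneous where no information is available.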
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954
A land cover change detection and classification protocol for updating Alaska NLCD 2001 to 2011
Jin, Suming; Yang, Limin; Zhu, Zhe; Homer, Collin G.
2017-01-01
Monitoring and mapping land cover changes are important ways to support evaluation of the status and transition of ecosystems. The Alaska National Land Cover Database (NLCD) 2001 was the first 30-m resolution baseline land cover product of the entire state derived from circa 2001 Landsat imagery and geospatial ancillary data. We developed a comprehensive approach named AKUP11 to update Alaska NLCD from 2001 to 2011 and provide a 10-year cyclical update of the state's land cover and land cover changes. Our method is designed to characterize the main land cover changes associated with different drivers, including the conversion of forests to shrub and grassland primarily as a result of wildland fire and forest harvest, the vegetation successional processes after disturbance, and changes of surface water extent and glacier ice/snow associated with weather and climate changes. For natural vegetated areas, a component named AKUP11-VEG was developed for updating the land cover that involves four major steps: 1) identify the disturbed and successional areas using Landsat images and ancillary datasets; 2) update the land cover status for these areas using a SKILL model (System of Knowledge-based Integrated-trajectory Land cover Labeling); 3) perform decision tree classification; and 4) develop a final land cover and land cover change product through the postprocessing modeling. For water and ice/snow areas, another component named AKUP11-WIS was developed for initial land cover change detection, removal of the terrain shadow effects, and exclusion of ephemeral snow changes using a 3-year MODIS snow extent dataset from 2010 to 2012. The overall approach was tested in three pilot study areas in Alaska, with each area consisting of four Landsat image footprints. The results from the pilot study show that the overall accuracy in detecting change and no-change is 90% and the overall accuracy of the updated land cover label for 2011 is 86%. The method provided a robust, consistent, and efficient means for capturing major disturbance events and updating land cover for Alaska. The method has subsequently been applied to generate the land cover and land cover change products for the entire state of Alaska.
Recent Updates of A Multi-Phase Transport (AMPT) Model
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
2008-10-01
We will present recent updates to the AMPT model, a Monte Carlo transport model for high energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while some updates improve the usability of the model such as providing the initial parton distributions or help avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
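The overall structure of a distorted Born iteration, with the Green's functions refreshed after every linearized step, can be sketched as below. This is a schematic only: green, forward and reg_solve are hypothetical stand-ins for the paper's T-matrix-based Green's function update, integral-equation forward map and regularized linear solver, none of which are reproduced here.

```python
# Schematic distorted Born iteration (DBI) loop. All callables are stand-ins:
# green(model) would build the dyadic Green's functions for the current
# background (updated via the T-matrix in the paper), forward(model) the
# integral-equation forward simulation, and reg_solve any regularized linear
# solver. Structure only; none of the paper's actual operators appear here.
def dbi_invert(d_obs, model0, green, forward, reg_solve, n_iter=10):
    model = model0.copy()                       # e.g. a NumPy conductivity model
    for _ in range(n_iter):
        G = green(model)                        # refresh Green's functions
        residual = d_obs - forward(model)       # data misfit for current model
        model = model + reg_solve(G, residual)  # linearized ill-posed update
    return model
```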
Updating of states in operational hydrological models
NASA Astrophysics Data System (ADS)
Bruland, O.; Kolberg, S.; Engeland, K.; Gragne, A. S.; Liston, G.; Sand, K.; Tøfte, L.; Alfredsen, K.
2012-04-01
Operationally, the main purpose of hydrological models is to provide runoff forecasts. The quality of the model state and the accuracy of the weather forecast together with the model quality define the runoff forecast quality. Input and model errors accumulate over time and may leave the model in a poor state. Usually model states can be related to observable conditions in the catchment. Updating these states, knowing their relation to observable catchment conditions, directly influences the forecast quality. Norway is internationally at the forefront of hydropower scheduling on both short and long terms. The inflow forecasts are fundamental to this scheduling. Their quality directly influences the producers' profit as they optimize hydropower production to market demand and at the same time minimize spill of water and maximize available hydraulic head. The quality of the inflow forecasts strongly depends on the quality of the models applied and the quality of the information they use. In this project the focus has been to improve the quality of the model states upon which the forecast is based. Runoff and snow storage are two observable quantities that reflect the model state and are used in this project for updating. Generally the methods used can be divided into three groups: the first re-estimates the forcing data in the updating period; the second alters the weights in the forecast ensemble; and the third directly changes the model states. The uncertainty related to the forcing data through the updating period is due both to uncertainty in the actual observations and to how well the gauging stations represent the catchment with respect to temperatures and precipitation. The project looks at methodologies that automatically re-estimate the forcing data and tests the result against the observed response. Model uncertainty is reflected in a joint distribution of model parameters estimated using the DREAM algorithm.
Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H
2014-01-01
Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier-score. Results The number of cardiac surgery interventions performed could not be assessed using the administrative database as the intervention code was incorrect in 1.4–26.3%, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (c-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier Score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian Mixture Model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments proved that the proposed method not only has good robustness and detection performance, but also good adaptability. Even in special cases, such as when the grayscale changes greatly, the proposed method still performs well.
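For readers unfamiliar with the baseline scheme, the sketch below shows Gaussian-mixture background subtraction using OpenCV's stock MOG2 implementation. The paper's adaptive per-pixel choice of the number of Gaussians and its morphological reconstruction step are not part of this stock model, and the input file name is a placeholder.

```python
import cv2

# Background subtraction with OpenCV's Gaussian-mixture model (MOG2),
# illustrating the general scheme the abstract improves upon.
cap = cv2.VideoCapture("traffic.mp4")      # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)         # per-pixel GMM update + classification
    # crude opening as a stand-in for the paper's morphological reconstruction
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cap.release()
```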
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
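A generic recursive least-squares update of the kind mentioned as the second indirect adaptive law might look like the following sketch; the regressor/parameter notation is standard textbook RLS and is not taken from the NASA report.

```python
import numpy as np

# Minimal recursive least-squares (RLS) estimator of uncertain parameters
# theta in the linear-in-parameters model y = phi^T theta.
class RLS:
    def __init__(self, n, lam=0.99, p0=1e3):
        self.theta = np.zeros(n)       # parameter estimate
        self.P = np.eye(n) * p0        # covariance-like matrix
        self.lam = lam                 # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain vector
        self.theta += k * (y - phi @ self.theta)  # correct estimate
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

Each call folds one new measurement into the estimate, which is what lets the model inversion controller track slowly varying plant dynamics online.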
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms
Bailey, Heather R.; Zacks, Jeffrey M.
2015-01-01
Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248
On the implicit density based OpenFOAM solver for turbulent compressible flows
NASA Astrophysics Data System (ADS)
Fürst, Jiří
This contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel method (LU-SGS), whereas the turbulence model variables are updated using the implicit Euler method.
Impact of mobility structure on optimization of small-world networks of mobile agents
NASA Astrophysics Data System (ADS)
Lee, Eun; Holme, Petter
2016-06-01
In ad hoc wireless networking, units are connected to each other rather than to a central, fixed infrastructure. Constructing and maintaining such networks creates several trade-off problems between robustness, communication speed, power consumption, etc., that bridge engineering, computer science and the physics of complex systems. In this work, we address the role of the mobility patterns of the agents in the optimal tuning of a small-world type network construction method. By this method, the network is updated periodically and held static between the updates. We investigate the optimal updating times for different scenarios of the movement of agents (modeling, for example, the fat-tailed trip distances, and periodicities, of human travel). We find that these mobility patterns affect the power consumption in non-trivial ways and discuss how these effects can best be handled.
Variations on the Game of Life
NASA Astrophysics Data System (ADS)
Peper, Ferdinand; Adachi, Susumu; Lee, Jia
The Game of Life is defined in the framework of Cellular Automata with discrete states that are updated synchronously. Though this in itself has proven to be fertile ground for research, it leaves open questions regarding the robustness of the model with respect to variations in updating methods, cell state representations, neighborhood definitions, etc. These questions may become important when the ideal conditions under which the Game of Life is supposed to operate cannot be satisfied, like in physical realizations. This chapter describes three models in which Game of Life-like behavior is obtained, even though some basic tenets are violated.
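For reference, the baseline synchronous update that the chapter's variations relax can be written compactly with NumPy; the grid size and random seed below are arbitrary.

```python
import numpy as np

# Synchronous Game of Life step on a toroidal grid: every cell is updated
# at once from the previous generation, the standard rule the chapter varies.
def life_step(grid):
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))                 # eight-neighbour count
    # birth on exactly 3 neighbours; survival on 2 or 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=(32, 32))
for _ in range(10):
    g = life_step(g)   # all cells updated simultaneously (synchronous)
```

Asynchronous variants of the kind the chapter studies would instead update cells one at a time or probabilistically, which is exactly where the robustness questions arise.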
Vijayakumar, Supreeta; Conway, Max; Lió, Pietro; Angione, Claudio
2017-05-30
Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to be able to see the wood (or the forest) for the trees. Driven by this analogy, we here present a 'forest' of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim to develop a vision of how existing methods could hybridize and become more complex. We then provide the first hands-on tutorial for multi-objective optimization of metabolic models in R. We finally discuss the implementation of multi-view machine learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimization of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning and a step-by-step R tutorial should be of interest for both the beginner and the advanced user.
A Physics-Based Vibrotactile Feedback Library for Collision Events.
Park, Gunhyuk; Choi, Seungmoon
2017-01-01
We present PhysVib: a software solution on the mobile platform extending an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine using an exponentially-decaying sinusoidal model. We demonstrate through a user study that this vibration model is more appropriate to our purpose in terms of perceptual quality than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggested that PhysVib enables more realistic vibrotactile feedback than the other methods in terms of perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while greatly reducing application development time.
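A minimal sketch of an exponentially decaying sinusoidal vibration model like the one the abstract mentions is given below; the waveform parameters and sampling rate are arbitrary placeholders, not PhysVib's actual values.

```python
import numpy as np

# Exponentially decaying sinusoid for a collision at t = 0:
#   a(t) = A * exp(-d * t) * sin(2 * pi * f * t)
# All parameter values here are illustrative placeholders.
def collision_vibration(amplitude, freq_hz, decay, duration, rate=8000):
    t = np.arange(0.0, duration, 1.0 / rate)
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

# e.g. a short 250 Hz burst that dies out within ~100 ms
samples = collision_vibration(amplitude=1.0, freq_hz=250.0,
                              decay=40.0, duration=0.1)
```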
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
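The predict/correct cycle at the core of such an EKF can be sketched as follows; the plant model f, measurement model h and their Jacobians F and H are placeholders for the IGCC dynamic model, and the patent's preemptive constraining of the state estimates and covariance is omitted.

```python
import numpy as np

# Bare-bones extended Kalman filter predict/correct cycle of the kind the
# abstract describes. f, h, F, H, Q, R are generic stand-ins, not the
# patent's plant model.
def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    # predict with the dynamic model
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # correct with sensed plant output variables
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the patented system a constraining step would be applied to x_new and P_new before they are fed back into the dynamic model.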
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and calculated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
DAMIT: a database of asteroid models
NASA Astrophysics Data System (ADS)
Durech, J.; Sidorin, V.; Kaasalainen, M.
2010-04-01
Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were derived also using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyse the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to the change of illumination conditions and thus can adapt to illumination changes sensitively. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels with motion vectors resulting from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model can detect moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
Finite element modelling and updating of a lively footbridge: The complete process
NASA Astrophysics Data System (ADS)
Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul
2007-03-01
The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling and the updated FE model could be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering. It can serve as an advanced tool for getting reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed. This interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.
NASA Astrophysics Data System (ADS)
Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.
2012-12-01
This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of temporal correlation structure, Hydrol. Earth Syst. Sci. Discuss., 9, 3087-3127, doi:10.5194/hessd-9-3087-2012, 2012b.
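A stochastic EnKF analysis step of the kind used to update the distributed model states can be sketched as below; matrix shapes and names are generic and not tied to the HBV-96 implementation.

```python
import numpy as np

# Stochastic EnKF analysis step: update an ensemble of model states X
# (n_state x n_ens) with perturbed discharge observations y (n_obs,).
# H maps state to observation space; R is observation-error covariance.
def enkf_update(X, y, H, R, rng):
    n_obs, n_ens = len(y), X.shape[1]
    Y = H @ X                                   # predicted observations
    Xa = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xa @ Ya.T / (n_ens - 1)               # state-obs cross covariance
    Pyy = Ya @ Ya.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), R, size=n_ens).T       # perturbed observations
    return X + K @ (y_pert - Y)
```

Assimilating interior gauges, as in the study, simply enlarges y and H; the update itself is unchanged.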
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation which lies as a foundation to quantify the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depend on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique with an expansion to compute the Hessian acting on a model perturbation to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.
Li, Yuan-Xin; Yang, Guang-Hong
2018-04-01
This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using the event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. In comparing with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when it is necessary. Therefore, the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
Damage identification via asymmetric active magnetic bearing acceleration feedback control
NASA Astrophysics Data System (ADS)
Zhao, Jie; DeSmidt, Hans; Yao, Wei
2015-04-01
A Floquet-based damage detection methodology for cracked rotor systems is developed and demonstrated on a shaft-disk system. This approach utilizes measured changes in the system natural frequencies to estimate the severity and location of shaft structural cracks during operation. The damage detection algorithm obtains an initial guess by a least-squares method and then refines the damage parameter vector iteratively through eigenvector updating. An Active Magnetic Bearing is introduced to break the symmetric structure of the rotor system, and the tuning range of proper stiffness/virtual-mass gains is studied. The system model is built based on the energy method and the equations of motion are derived by applying the assumed modes method and the Lagrange principle. In addition, the crack model is based on the Strain Energy Release Rate (SERR) concept in fracture mechanics. Finally, the method is synthesized via harmonic balance, and numerical examples for a shaft/disk system demonstrate its effectiveness in detecting both the location and severity of structural damage.
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Lotfy, Hayam M.; Rezk, Mamdouh R.; Omran, Yasmin Rostom
2015-04-01
Smart and novel spectrophotometric and chemometric methods have been developed and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and dexamethasone sodium phosphate (DSP) in the presence of interfering substances without prior separation. The first method depends upon derivative subtraction coupled with constant multiplication. The second is the ratio difference method at optimum wavelengths, which were selected after applying a derivative transformation method via multiplication by a decoding spectrum in order to cancel the contribution of non-labeled interfering substances. The third method relies on partial least squares with regression model updating. The methods are so simple that they do not require any preliminary separation steps. Accuracy, precision and linearity ranges of these methods were determined. Moreover, specificity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied for the analysis of both drugs in their pharmaceutical formulation. The obtained results have been statistically compared to those of an official spectrophotometric method, leading to the conclusion that there is no significant difference between the proposed methods and the official one with respect to accuracy and precision.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is to update the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close relationship between the experimental and numerical models.
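A minimal firefly optimizer, in the standard textbook formulation rather than the paper's MATLAB implementation, is sketched below; here f would be the mismatch between measured and simulated responses (e.g., tip deflection or natural frequencies), and all parameter values are illustrative.

```python
import numpy as np

# Minimal firefly algorithm for minimizing an updating objective f(x).
# beta0, gamma, alpha follow the standard formulation; values are arbitrary.
def firefly(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(0)
    lo, hi = bounds                              # arrays of lower/upper bounds
    X = rng.uniform(lo, hi, size=(n, len(lo)))   # initial swarm
    for _ in range(iters):
        cost = np.array([f(x) for x in X])       # brightness = low cost
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # move i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
    return X[np.argmin([f(x) for x in X])]
```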
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.
Mastin, Mark
2012-01-01
A previous collaborative effort between the U.S. Geological Survey and the Bureau of Reclamation resulted in a watershed model for four watersheds that discharge into Potholes Reservoir, Washington. Since the model was constructed, two new meteorological sites have been established that provide more reliable real-time information. The Bureau of Reclamation was interested in incorporating this new information into the existing watershed model developed in 2009, and adding measured snowpack information to update simulated results and to improve forecasts of runoff. This report includes descriptions of procedures to aid a user in making model runs, including a description of the Object User Interface for the watershed model with details on specific keystrokes to generate model runs for the contributing basins. A new real-time, data-gathering computer program automates the creation of the model input files and includes the new meteorological sites. The 2009 watershed model was updated with the new sites and validated by comparing simulated results to measured data. As in the previous study, the updated model (2012 model) does a poor job of simulating individual storms, but a reasonably good job of simulating seasonal runoff volumes. At three streamflow-gaging stations, the January 1 to June 30 retrospective forecasts of runoff volume for years 2010 and 2011 were within 40 percent of the measured runoff volume for five of the six comparisons, ranging from -39.4 to 60.3 percent difference. A procedure for collecting measured snowpack data and using the data in the watershed model for forecast model runs, based on the Ensemble Streamflow Prediction method, is described, with an example that uses 2004 snow-survey data.
The Use of Multiple Data Sources in the Process of Topographic Maps Updating
NASA Astrophysics Data System (ADS)
Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.
2016-06-01
The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research of these images on the 1:50,000 scale map. DEMs that are globally available can represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and updating. On the other side, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of depicting the earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5, and at the same time, through generalization and additional data sources, of the Romanian 1:50,000 scale map. This paper also investigates the general perspective of the automatic use of DTM-derived products in the process of updating topographic maps.
Budget Online Learning Algorithm for Least Squares SVM.
Jian, Ling; Shen, Shuqian; Li, Jundong; Liang, Xijun; Li, Lei
2017-09-01
Batch-mode least squares support vector machine (LSSVM) is often associated with an unbounded number of support vectors (SVs), making it unsuitable for applications involving large-scale streaming data. Limited-scale LSSVM, which allows efficient updating, seems to be a good solution to tackle this issue. In this paper, to train the limited-scale LSSVM dynamically, we present a budget online LSSVM (BOLSSVM) algorithm. Methodologically, by setting a fixed budget for SVs, we are able to update the LSSVM model according to the updated SV set dynamically without retraining from scratch. In particular, when a new small chunk of SVs substitutes for the old ones, the proposed algorithm employs a low-rank correction technology and the Sherman-Morrison-Woodbury formula to compute the inverse of the saddle-point matrix derived from the LSSVM's Karush-Kuhn-Tucker (KKT) system, which, in turn, updates the LSSVM model efficiently. In this way, the proposed BOLSSVM algorithm is especially useful for online prediction tasks. Another merit of the proposed BOLSSVM is that it can be used for k-fold cross validation. Specifically, compared with batch-mode learning methods, the computational complexity of the proposed BOLSSVM method is significantly reduced from O(n^4) to O(n^3) for leave-one-out cross validation with n training samples. The experimental results of classification and regression on benchmark data sets and real-world applications show the validity and effectiveness of the proposed BOLSSVM algorithm.
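The rank-one special case of the low-rank inverse correction underlying BOLSSVM is the Sherman-Morrison formula; a small self-contained demonstration on a generic matrix (not the actual KKT saddle-point system) follows.

```python
import numpy as np

# Rank-one inverse update via the Sherman-Morrison formula:
#   (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
# Avoids re-inverting A when a single SV-like term is swapped in.
def sherman_morrison(A_inv, u, v):
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)    # well-conditioned test matrix
A_inv = np.linalg.inv(A)
u, v = rng.normal(size=5), rng.normal(size=5)
updated = sherman_morrison(A_inv, u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True
```

Swapping a chunk of SVs corresponds to a higher-rank version of the same identity (Woodbury), which is where the O(n^3)-per-fold saving comes from.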
Revisions to some parameters used in stochastic-method simulations of ground motion
Boore, David; Thompson, Eric M.
2015-01-01
The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physics‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.
Documentation for the 2014 update of the United States national seismic hazard maps
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.
2014-01-01
The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.
A new frequency matching technique for FRF-based model updating
NASA Astrophysics Data System (ADS)
Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng
2017-05-01
Frequency Response Function (FRF) residues have been widely used to update finite element models. They are a kind of original measurement information and have the advantages of rich data, no extraction errors, etc. However, like other sensitivity-based methods, an FRF-based identification method also needs to face the ill-conditioning problem, which is even more serious since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given measured frequency, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which finally results in larger effects of measurement errors and damping. Hence, in the solution process, correct selection of the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is regarded as the identification criterion.
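Reading only from the abstract, the proposed selection rule might be sketched as picking, for each measured FRF value, the analytical frequency whose FRF magnitude is closest on a log scale; the function below is that guess, not the paper's actual algorithm.

```python
import numpy as np

# Sketch of magnitude-matched frequency selection: given the theoretical FRF
# H_a sampled on a frequency grid and one measured FRF value H_x_meas, pick
# the grid frequency whose |FRF| is closest in order of magnitude.
def match_frequency(freqs, H_a, H_x_meas):
    gap = np.abs(np.log10(np.abs(H_a)) - np.log10(np.abs(H_x_meas)))
    return freqs[np.argmin(gap)]   # frequency used in the sensitivity step
```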
The LANDFIRE Refresh strategy: updating the national dataset
Nelson, Kurtis J.; Connot, Joel A.; Peterson, Birgit E.; Martin, Charley
2013-01-01
The LANDFIRE Program provides comprehensive vegetation and fuel datasets for the entire United States. As with many large-scale ecological datasets, vegetation and landscape conditions must be updated periodically to account for disturbances, growth, and natural succession. The LANDFIRE Refresh effort was the first attempt to consistently update these products nationwide. It incorporated a combination of specific systematic improvements to the original LANDFIRE National data, remote sensing based disturbance detection methods, field collected disturbance information, vegetation growth and succession modeling, and vegetation transition processes. This resulted in the creation of two complete datasets for all 50 states: LANDFIRE Refresh 2001, which includes the systematic improvements, and LANDFIRE Refresh 2008, which includes the disturbance and succession updates to the vegetation and fuel data. The new datasets are comparable for studying landscape changes in vegetation type and structure over a decadal period, and provide the most recent characterization of fuel conditions across the country. The applicability of the new layers is discussed and the effects of using the new fuel datasets are demonstrated through a fire behavior modeling exercise using the 2011 Wallow Fire in eastern Arizona as an example.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, in which the corresponding eigenvalue problems are solved. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Timing Interactions in Social Simulations: The Voter Model
NASA Astrophysics Data System (ADS)
Fernández-Gracia, Juan; Eguíluz, Víctor M.; San Miguel, Maxi
The recent availability of huge, high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents, the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well-defined characteristic interevent time. Taking the voter model as a paradigmatic opinion model, we review the standard update rules and propose two new update rules which are able to account for heterogeneous activity patterns. For the new update rules, each node is updated with a probability that depends on the time since the last event of the node, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. In addition, for the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while for the endogenous update there exists a coarsening process that drives the system toward consensus configurations.
Katzman, Braden; Tang, Doris; Santella, Anthony; Bao, Zhirong
2018-04-04
AceTree, a software application first released in 2006, facilitates exploration, curation and editing of tracked C. elegans nuclei in 4-dimensional (4D) fluorescence microscopy datasets. Since its initial release, AceTree has been continuously used to interact with, edit and interpret C. elegans lineage data. In its 11-year lifetime, AceTree has been periodically updated to meet the technical and research demands of its community of users. This paper presents the newest iteration of AceTree, which contains extensive updates, demonstrates the new applicability of AceTree in other developmental contexts, and presents its evolutionary software development paradigm as a viable model for maintaining scientific software. Large-scale updates have been made to the user interface for an improved user experience. Tools have been grouped according to functionality and obsolete methods have been removed. Internal requirements have been revised to enable greater flexibility of use, both in C. elegans contexts and in other model organisms. Additionally, the original 3-dimensional (3D) viewing window has been completely reimplemented. The new window provides a new suite of tools for data exploration. By responding to technical advancements and research demands, AceTree has remained a useful tool for scientific research for over a decade. The updates made to the codebase have extended AceTree's applicability beyond its initial use in C. elegans and enabled its usage with other model organisms. The evolution of AceTree demonstrates a viable model for maintaining scientific software over long periods of time.
Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit
NASA Technical Reports Server (NTRS)
Barry, B. F.; Pimm, R. S.; Rowe, C. K.
1971-01-01
In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.
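As a rough illustration of the density-approximation step described above, the sketch below fits a truncated Fourier series in true anomaly to sampled densities by least squares, which could then be refreshed periodically as the orbit evolves. The density samples come from a stand-in analytic profile, not the 1970 Jacchia model, and the harmonic count is arbitrary.

```python
import numpy as np

def fit_fourier_coeffs(rho_samples, nu_samples, n_harmonics=4):
    """Least-squares fit of rho(nu) ~ a0/2 + sum_k [a_k cos(k nu) + b_k sin(k nu)]."""
    cols = [np.full_like(nu_samples, 0.5)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * nu_samples))
        cols.append(np.sin(k * nu_samples))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, rho_samples, rcond=None)
    return coeffs

def eval_fourier(coeffs, nu):
    out = 0.5 * coeffs[0]
    n_harmonics = (len(coeffs) - 1) // 2
    for k in range(1, n_harmonics + 1):
        out += coeffs[2 * k - 1] * np.cos(k * nu) + coeffs[2 * k] * np.sin(k * nu)
    return out

# stand-in density profile (NOT the Jacchia model): diurnal-bulge-like variation
nu = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
rho = 1e-12 * (1.0 + 0.6 * np.cos(nu - 1.0) + 0.1 * np.cos(2 * nu))

c = fit_fourier_coeffs(rho, nu)
print(eval_fourier(c, 0.3), 1e-12 * (1.0 + 0.6 * np.cos(0.3 - 1.0) + 0.1 * np.cos(0.6)))
```

In the scheme described in the abstract, the coefficients would be recomputed at intervals from the full atmospheric model so that the cheap series tracks slow changes in solar and geomagnetic conditions.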
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which additionally gives a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
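A conservative classifier update can be sketched as a running-Gaussian blend of the naive Bayes parameters, in the spirit of CT-style appearance-model updates; the learning factor and the reliability gate below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def conservative_update(mu, sigma, samples, lam=0.85, reliable=True):
    """Blend old Gaussian appearance parameters with new sample statistics.

    lam close to 1 keeps the old model (a conservative update); skipping the
    update on unreliable frames helps avoid drift under occlusion.
    """
    if not reliable:
        return mu, sigma
    mu_new = samples.mean(axis=0)
    sig_new = samples.std(axis=0) + 1e-6
    mu_upd = lam * mu + (1.0 - lam) * mu_new
    var_upd = (lam * sigma**2 + (1.0 - lam) * sig_new**2
               + lam * (1.0 - lam) * (mu - mu_new)**2)
    return mu_upd, np.sqrt(var_upd)

# toy usage: 3 compressive features, 50 positive samples from the new frame
mu, sigma = np.zeros(3), np.ones(3)
mu, sigma = conservative_update(mu, sigma, np.random.randn(50, 3) + 0.5)
print(mu, sigma)
```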
Bootstrap data methodology for sequential hybrid model building
NASA Technical Reports Server (NTRS)
Volponi, Allan J. (Inventor); Brotherton, Thomas (Inventor)
2007-01-01
A method for modeling engine operation comprising the steps of: 1. collecting a first plurality of sensory data, 2. partitioning a flight envelope into a plurality of sub-regions, 3. assigning the first plurality of sensory data into the plurality of sub-regions, 4. generating an empirical model of at least one of the plurality of sub-regions, 5. generating a statistical summary model for at least one of the plurality of sub-regions, 6. collecting an additional plurality of sensory data, 7. partitioning the additional plurality of sensory data into the plurality of sub-regions, 8. generating a plurality of pseudo-data using the empirical model, and 9. concatenating the plurality of pseudo-data and the additional plurality of sensory data to generate an updated empirical model and an updated statistical summary model for at least one of the plurality of sub-regions.
Steps towards Improving GNSS Systematic Errors and Biases
NASA Astrophysics Data System (ADS)
Herring, T.; Moore, M.
2017-12-01
Four general areas of analysis method improvements, three related to data analysis models and the fourth to calibration methods, were recommended at the recent unified analysis workshop (UAW); we discuss aspects of each of these areas. The gravity fields used in the GNSS orbit integrations should be updated to match modern fields, making them consistent with the fields used by the other IAG services. The update would include the static part of the field and a time-variable component. The force models associated with radiation forces are the most uncertain, and modeling of these forces can be made more consistent through the exchange of attitude information. The International GNSS Service (IGS) will develop an attitude format and make attitude information available so that analysis centers can validate their models. The IGS has noted the appearance of the GPS draconitic period and its harmonics in time series of various geodetic products (e.g., positions and Earth orientation parameters). An updated short-period (diurnal and semidiurnal) model is needed, along with a method to determine the best model. The final area, not directly related to analysis models, is the recommendation that site-dependent calibration of GNSS antennas is needed, since these calibrations have a direct effect on the ITRF realization and on position offsets when antennas are changed. The effects of using antenna-specific phase center models will be evaluated for those sites where these values are available without disturbing an existing antenna installation. Potential development of an in-situ antenna calibration system is strongly encouraged. In-situ calibration would be deployed at core sites where GNSS sites are tied to other geodetic systems. With the recent expansion of the number of GPS satellites transmitting unencrypted codes on the GPS L2 frequency and the availability of software GNSS receivers, in-situ calibration between an existing installation and a movable directional antenna is now more likely to generate accurate results than the earlier analog switching systems. With all of these improvements, there is the expectation of better agreement between the space geodetic methods, allowing more definitive assessment and modeling of the Earth's time-variable shape and gravity field.
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces an altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Like the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. Unlike the IAU, however, the NIAU method applies time-evolved forcing, employing the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the NIAU solution is better than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
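A toy contrast between intermittent insertion and IAU-style gradual forcing, on a one-variable linear model, may help fix ideas; the model, step size, and increment below are arbitrary, and the NIAU's time-evolved forcing is only indicated in a comment.

```python
import numpy as np

def step(x, dt=0.1, a=-0.5):
    return x + dt * a * x             # toy linear model dx/dt = a x

def intermittent(x, increment, n_steps):
    x = x + increment                  # add the full analysis increment at once
    for _ in range(n_steps):
        x = step(x)
    return x

def iau(x, increment, n_steps):
    f = increment / n_steps            # constant IAU forcing over the window
    for _ in range(n_steps):
        x = step(x) + f                # increment spread over every time step
    return x

x0, inc, n = 1.0, 0.2, 10
print(intermittent(x0, inc, n), iau(x0, inc, n))
# The NIAU would replace the constant forcing f by a time-evolved increment,
# propagated to each step with a (singular-vector-filtered) forward operator,
# so that the end-of-window state matches the intermittent analysis.
```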
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
It is a challenge to represent the target appearance model for moving-object tracking in complex environments. This study presents a novel method whose appearance model is described by double templates based on a timed motion history image with HSV color histogram features (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, J. M.; Higgins, J.; Fleger, S.
The U.S. Nuclear Regulatory Commission (NRC) reviews the human factors engineering (HFE) programs of applicants for nuclear power plant construction permits, operating licenses, standard design certifications, and combined operating licenses. The purpose of these safety reviews is to help ensure that personnel performance and reliability are appropriately supported. Detailed design review procedures and guidance for the evaluations are provided in three key documents: the Standard Review Plan (NUREG-0800), the HFE Program Review Model (NUREG-0711), and the Human-System Interface Design Review Guidelines (NUREG-0700). These documents were last revised in 2007, 2004 and 2002, respectively. The NRC is committed to the periodic update and improvement of the guidance to ensure that it remains a state-of-the-art design evaluation tool. To this end, the NRC is updating its guidance to stay current with recent research on human performance, advances in HFE methods and tools, and new technology being employed in plant and control room design. NUREG-0711 is the first document to be addressed. We present the methodology used to update NUREG-0711 and summarize the main changes made. Finally, we discuss the current status of the update program and the future plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juxiu Tong; Bill X. Hu; Hai Huang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of chemical reaction processes and the limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and to achieve more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate the reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the reactive rate parameters could be efficiently improved with the EnKF-based data assimilation method, while simultaneously improving the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
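A generic stochastic-EnKF analysis step for an augmented state (rate parameters stacked with concentrations) might look like the sketch below. The linear observation operator and the clipping, used here as a crude stand-in for the paper's physically motivated constraints, are assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_sd, H, bounds=None):
    """Stochastic EnKF analysis step on an augmented state.

    ensemble : (n_ens, n_state) array; the state may stack rate parameters
               and solute concentrations
    H        : (n_obs, n_state) linear observation operator
    """
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = X @ H.T                                   # observation-space anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + np.diag(obs_err_sd**2)
    Pxy = X.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    perturbed = obs + obs_err_sd * np.random.randn(n_ens, len(obs))
    ensemble = ensemble + (perturbed - ensemble @ H.T) @ K.T
    if bounds is not None:                        # crude constraint handling
        ensemble = np.clip(ensemble, bounds[0], bounds[1])
    return ensemble

# tiny demo: state = [rate parameter, concentration], only concentration observed
H = np.array([[0.0, 1.0]])
ens = np.random.default_rng(7).normal([1.0, 5.0], [0.3, 1.0], size=(100, 2))
ens = enkf_update(ens, obs=np.array([6.0]), obs_err_sd=np.array([0.5]), H=H,
                  bounds=(np.array([0.0, 0.0]), np.array([np.inf, np.inf])))
print(ens.mean(axis=0))
```

Because the parameter is correlated with the observed concentration through the ensemble covariance, assimilating concentration data pulls the rate parameter toward values consistent with the observations, which is the calibration mechanism the abstract describes.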
Diffusive Public Goods and Coexistence of Cooperators and Cheaters on a 1D Lattice
Scheuring, István
2014-01-01
Many populations of cells cooperate through the production of extracellular materials. These materials (enzymes, siderophores) spread by diffusion and can be used by both the cooperator and the cheater (non-producer) cells. In this paper the problem of coexistence of cooperator and cheater cells is studied on a 1D lattice where cooperator cells produce a diffusive material which benefits individuals according to its local concentration. The reproduction success of a cell increases linearly with the benefit in the first model version and increases non-linearly (saturates) in the second version. Two types of update rules are considered: either the cooperative cell stops producing material before death (death-production-birth, DpB) or it produces the common material before it is selected to die (production-death-birth, pDB). The empty space is occupied by its neighbors according to their replication rates. Using analytical and numerical methods I show that coexistence of the cooperator and cheater cells is possible, although atypical, in the linear version of this 1D model under either the DpB or the pDB update rule. While coexistence is impossible in the non-linear model with the pDB update rule, it is one of the typical behaviors in the non-linear model with the DpB update rule. PMID:25025985
Particle tracing modeling of ion fluxes at geosynchronous orbit during substorms
NASA Astrophysics Data System (ADS)
Brito, T. V.; Jordanova, V.; Woodroffe, J. R.; Henderson, M. G.; Morley, S.; Birn, J.
2016-12-01
The SHIELDS project aims to couple a host of different models for different regions of the magnetosphere using different numerical methods such as MHD, PIC, and particle tracing, with the ultimate goal of building a more realistic model of the whole magnetospheric environment that captures, as much as possible, the physics of the various plasma populations. In this context, we present a modeling framework that can be coupled with a global MHD model to calculate particle fluxes in the inner magnetosphere, which can in turn be used to continually update the input for a ring current model. One advantage of this approach over using spacecraft data is that it provides much better spatial and temporal coverage of the nightside geosynchronous region, and thus a more complete input for the ring current model, which is likely to produce more accurate global results for the ring current population. In this presentation, we describe the particle tracing method in more detail, the method used to couple it to the BATS-R-US 3D global MHD code, and the method used to pass the flux results to the RAM-SCB ring current model. We also present simulation results for the July 18, 2013 period, which showed significant substorm activity, and compare simulated ion fluxes on the nightside magnetosphere with spacecraft observations to gauge how well our simulations capture substorm dynamics.
Identification of Historical Veziragasi Aqueduct Using the Operational Modal Analysis
Ercan, E.; Nuhoglu, A.
2014-01-01
This paper describes the results of a model updating study conducted on a historical aqueduct, called Veziragasi, in Turkey. The output-only modal identification results obtained from ambient vibration measurements of the structure were used to update a finite element model of the structure. To develop a solid model of the structure, its dimensions, defects, and material degradation were determined in detail through a measurement survey. For evaluation of the material properties of the structure, nondestructive and destructive testing methods were applied. The modal analysis of the structure was performed by FEM. Then, a nondestructive dynamic test as well as operational modal analysis was carried out and the dynamic properties were extracted. The natural frequencies and corresponding mode shapes were determined from both theoretical and experimental modal analyses and compared with each other. Good agreement was attained between the mode shapes, but there were some differences between the natural frequencies. The sources of these differences were identified and the FEM model was updated by changing material parameters and boundary conditions. Finally, a realistic analytical model of the aqueduct was put forward and the results were discussed. PMID:24511287
NASA Technical Reports Server (NTRS)
Tsang, Leung; Hwang, Jenq-Neng
1996-01-01
A method to incorporate passive microwave remote sensing measurements within a spatially distributed snow hydrology model to provide estimates of the spatial distribution of Snow Water Equivalent (SWE) as a function of time is implemented. The passive microwave remote sensing measurements are at 25 km resolution. However, in mountain regions the spatial variability of SWE over a 25 km footprint is large due to topographic influences. On the other hand, the snow hydrology model has built-in topographic information and the capability to estimate SWE at a 1 km resolution. In our work, the snow hydrology SWE estimates are updated and corrected using SSM/I passive microwave remote sensing measurements. The method is applied to the Upper Rio Grande River Basin in the mountains of Colorado. The change in prediction of SWE from hydrology modeling with and without updating is compared with measurements from two SNOTEL sites in and near the basin. The results indicate that the method incorporating the remote sensing measurements into the hydrology model is able to more closely estimate the temporal evolution of the measured values of SWE as a function of time.
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and we observe that the mean time to reach the absorbing state may not be well defined. The IET distributions resulting from both update schemes show power-law tails.
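A minimal sketch of such a state-dependent update on a fully connected network is given below, assuming an activation probability min(1, b/τ) with τ the time since the node's last event; the constant b and the sweep structure are illustrative choices, not the authors' exact protocol.

```python
import numpy as np
rng = np.random.default_rng(0)

def voter_sweep(state, last_event, t, endogenous=True, b=1.0):
    """One sweep of the voter model with activation probability ~ 1/(t - t_last).

    endogenous: the node clock resets only when the state changes;
    exogenous:  it resets at every update attempt.
    """
    n = len(state)
    for i in rng.permutation(n):
        tau = t - last_event[i]
        if rng.random() < min(1.0, b / max(tau, 1.0)):    # activation
            j = rng.integers(n)                           # random neighbour (fully connected)
            changed = state[j] != state[i]
            state[i] = state[j]                           # copy the neighbour's opinion
            if endogenous:
                if changed:
                    last_event[i] = t
            else:
                last_event[i] = t                         # reset on every attempt
    return state, last_event

n = 200
state = rng.integers(0, 2, n)
last_event = np.zeros(n)
for t in range(1, 500):
    state, last_event = voter_sweep(state, last_event, float(t))
print("density of +1 opinions:", state.mean())
```

Under the endogenous rule, nodes that have not changed state for a long time become increasingly inert, which is what produces the slow ordering reported in the abstract.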
How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!
NASA Astrophysics Data System (ADS)
Hassan Saddagh, Mohammad; Javad Abedini, Mohammad
2010-05-01
Flood forecasting and early warning, as a non-structural measure for flood control, is often considered the most effective and suitable alternative for mitigating the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic and/or black-box models, should secure accurate flood values and timing, especially for long lead times. The application of artificial neural networks (ANN) to flood forecasting has received extensive attention in recent years due to their capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from executing a plain ANN as a simulation model demonstrate a dramatic reduction in performance indices as lead time increases. This paper monitors the performance indices, as they relate to flood forecasting and early warning, using two different methodologies. While the first method employs a multilayer neural network trained using a back-propagation scheme to forecast the output hydrograph of a hypothetical river for various forecast lead times of up to 6.0 hr, the second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, and assesses the performance indices relative to the ANN alone as lead time increases. Results presented in both graphical and tabular format indicate the superiority of MIKE11 coupled with the ANN updating kernel over the ANN simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors increase rapidly for longer lead times. The second methodology provides more accurate and reliable results for longer forecast lead times.
Updating National Topographic Data Base Using Change Detection Methods
NASA Astrophysics Data System (ADS)
Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.
2016-06-01
The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadastre Agencies (NMCAs), the updating cycle takes a few years. Today, however, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results; a typical process involved comparing images from different periods. The success rates in identifying objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing and computer vision algorithms, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step toward updating the NTDB at the Survey of Israel.
Indurkhya, Sagar; Beal, Jacob
2010-01-01
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the number of reactions and species, rather than the much larger storage required for a dependency graph over reactions alone. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048
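The species-to-reaction dependency idea can be sketched with a tiny direct-method SSA in which only the propensities touched by a fired reaction are recomputed. This is a generic illustration, not the LOLCAT implementation, and the two-reaction system is arbitrary.

```python
import numpy as np
rng = np.random.default_rng(1)

# toy system: A -> B (rate k1), B -> A (rate k2)
k = np.array([1.0, 0.5])
stoich = {0: {0: -1, 1: +1},       # reaction -> {species index: change}
          1: {0: +1, 1: -1}}
reactants = {0: [0], 1: [1]}       # reaction -> species entering its propensity
# bipartite dependency: species -> reactions whose propensity uses it
dependents = {0: [0], 1: [1]}

x = np.array([100.0, 0.0])
prop = np.array([k[r] * np.prod([x[s] for s in reactants[r]]) for r in (0, 1)])

t, t_end = 0.0, 5.0
while t < t_end and prop.sum() > 0:
    a0 = prop.sum()
    t += rng.exponential(1.0 / a0)                # time to next reaction
    r = rng.choice(2, p=prop / a0)                # pick which reaction fires
    for s, d in stoich[r].items():                # apply the state change
        x[s] += d
        for rr in dependents[s]:                  # update only affected propensities
            prop[rr] = k[rr] * np.prod([x[q] for q in reactants[rr]])
print(t, x)
```

In a large network, the dependency map keeps each firing's bookkeeping proportional to the handful of reactions that actually read the changed species, which is the source of the speedup claimed above.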
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty into the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
NASA Astrophysics Data System (ADS)
Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.
2010-12-01
We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC alongside their other oceanographic data services. The Service could easily be expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and provide each product in a form consistent with the service update methods. We describe the capabilities of the web service and the methods for providing new components. Currently the Service provides TOPEX GDRs with Retracking (RGDRs) in a netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from netCDF files. The research described here was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Different regulatory schemes worldwide, and in particular the preparation for the new REACH legislation in Europe, increase the reliance on estimation methods for predicting potential chemical hazard.
Bayesian updating in a fault tree model for shipwreck risk assessment.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M
2017-07-15
Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made.
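A stripped-down version of the idea, assuming conjugate Beta priors for the basic-event probabilities, an OR-gate top event, and invented numbers throughout, could look like this:

```python
import numpy as np
rng = np.random.default_rng(2)

# generic prior for a hazardous-activity-induced damage probability
a, b = 2.0, 8.0                        # Beta(2, 8): prior mean 0.2 (illustrative)

# wreck-specific evidence: n surveyed analogue cases, s of them showing damage
s, n = 1, 12
a_post, b_post = a + s, b + (n - s)    # conjugate Bayesian update

n_mc = 100_000
p_damage = rng.beta(a_post, b_post, n_mc)     # basic event 1 (updated)
p_corrosion = rng.beta(3.0, 6.0, n_mc)        # basic event 2 (site-specific prior)

# top event (discharge) modeled as an OR gate over independent basic events
p_top = 1.0 - (1.0 - p_damage) * (1.0 - p_corrosion)
print("P(discharge): mean %.3f, 90%% CI (%.3f, %.3f)"
      % (p_top.mean(), *np.quantile(p_top, [0.05, 0.95])))
```

The Monte Carlo propagation carries the parameter uncertainty through the gate logic, so the output is a distribution over the discharge probability rather than a point estimate, mirroring the formal uncertainty handling described above.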
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form; this is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach to higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC, designed to improve sampling for large system sizes and large time steps by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
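For orientation, a bare-bones HMC step (leapfrog trajectory plus Metropolis test) on a toy two-parameter posterior is sketched below; the SHMC differs mainly in accepting or rejecting against a shadow Hamiltonian, which is not reproduced here, and the toy posterior stands in for the actual FEM likelihood.

```python
import numpy as np
rng = np.random.default_rng(3)

def hmc_step(q, neg_log_post, grad, eps=0.1, n_leap=20):
    """One Hybrid Monte Carlo step: leapfrog trajectory + Metropolis test."""
    p = rng.standard_normal(q.shape)               # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad(q_new)               # initial half kick
    for _ in range(n_leap):
        q_new += eps * p_new                       # drift
        p_new -= eps * grad(q_new)                 # kick
    p_new += 0.5 * eps * grad(q_new)               # trim the last kick to a half
    h_old = neg_log_post(q) + 0.5 * p @ p
    h_new = neg_log_post(q_new) + 0.5 * p_new @ p_new
    return (q_new, True) if rng.random() < np.exp(h_old - h_new) else (q, False)

# toy "posterior" over two stiffness parameters: standard normal
nlp = lambda q: 0.5 * q @ q
grad = lambda q: q

q = np.zeros(2)
samples = []
for _ in range(2000):
    q, _ = hmc_step(q, nlp, grad)
    samples.append(q.copy())
print(np.mean(samples, axis=0), np.std(samples, axis=0))
```

Because the leapfrog integrator conserves a nearby shadow Hamiltonian more accurately than the true one, the SHMC's reweighted acceptance test tolerates the larger time steps that plain HMC rejects.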
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' in on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2006-01-01
The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after the parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.
On Theoretical Limits of Dynamic Model Updating Using a Sensitivity-Based Approach
NASA Astrophysics Data System (ADS)
Gola, M. M.; Somà, A.; Botto, D.
2001-07-01
The present work deals with the determination of newly discovered conditions necessary for model updating with the eigensensitivity approach. The treatment concerns the maximum number of identifiable parameters in relation to the structure of the eigenvector derivatives. A mathematical demonstration based on the evaluation of the rank of the least-squares matrix produces the algebraic limiting conditions. Numerical application to a lumped-parameter structure is employed to validate the mathematical limits, taking into account different subsets of mode shapes. The demonstration is extended to the calculation of the eigenvector derivatives with both the Fox and Kapoor method and the Nelson method. Ill-conditioning of the least-squares sensitivity matrix is revealed through the covariance jump.
Future-year ozone prediction for the United States using updated models and inputs.
Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun
2017-08-01
The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) emissions modeling tools, inventories, and meteorology to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many in the eastern U.S. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.
Weijs, Liesbeth; Yang, Raymond S H; Das, Krishna; Covaci, Adrian; Blust, Ronny
2013-05-07
Physiologically based pharmacokinetic (PBPK) modeling in marine mammals is a challenge because of the lack of parameter information and the ban on exposure experiments. To minimize uncertainty and variability, parameter estimation methods are required for the development of reliable PBPK models. The present study is the first to develop PBPK models for the lifetime bioaccumulation of p,p'-DDT, p,p'-DDE, and p,p'-DDD in harbor porpoises. In addition, this study is also the first to apply the Bayesian approach executed with Markov chain Monte Carlo simulations using two data sets of harbor porpoises from the Black and North Seas. Parameters from the literature were used as priors for the first "model update" using the Black Sea data set, the resulting posterior parameters were then used as priors for the second "model update" using the North Sea data set. As such, PBPK models with parameters specific for harbor porpoises could be strengthened with more robust probability distributions. As the science and biomonitoring effort progress in this area, more data sets will become available to further strengthen and update the parameters in the PBPK models for harbor porpoises as a species anywhere in the world. Further, such an approach could very well be extended to other protected marine mammals.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function of the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second concerns the construction of the likelihood function: in general it is predefined using a known pdf, whereas here it is constructed in a new and different manner, using the considered system itself.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Large-scale operational areas often require multiple service robots for coverage and task parallelism. In such scenarios, each robot keeps its individual map of the environment and serves specific areas of the map at different times. We propose a knowledge sharing mechanism for multiple robots in which one robot can inform other robots about changes in the map, like a path blockage or new static obstacles, encountered at specific areas of the map. This symbiotic information sharing allows the robots to update remote areas of the map without having to explicitly navigate those areas, and to plan efficient paths. A node representation of paths is presented for seamless sharing of blocked-path information. The transience of obstacles is modeled to track obstacles which might have been removed. A lazy information update scheme is presented in which only relevant information affecting the current task is updated, for efficiency. The advantages of the proposed method for path planning are discussed against a traditional method, with experimental results in both simulated and real environments. PMID:28678193
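One way to realize such sharing, sketched here under assumed names and a fixed forgetting horizon standing in for the obstacle-transience model, is a shared store of blocked edges that each robot queries lazily against its current route:

```python
import time

class SharedMapKnowledge:
    """Blocked-edge store shared by all robots; edges keyed by node pairs."""
    def __init__(self, forget_after=600.0):
        self.blocked = {}                     # (u, v) -> last report time
        self.forget_after = forget_after      # obstacle transience horizon (s)

    def report_block(self, u, v):
        self.blocked[tuple(sorted((u, v)))] = time.time()

    def is_blocked(self, u, v):
        key = tuple(sorted((u, v)))
        t = self.blocked.get(key)
        if t is None:
            return False
        if time.time() - t > self.forget_after:
            del self.blocked[key]             # stale: the obstacle may have moved
            return False
        return True

def lazy_update(route, shared):
    """Apply only the shared knowledge relevant to the current route."""
    return [e for e in zip(route, route[1:]) if shared.is_blocked(*e)]

shared = SharedMapKnowledge()
shared.report_block("n3", "n4")               # robot A found a blocked corridor
print(lazy_update(["n1", "n2", "n3", "n4"], shared))  # robot B checks its plan
```

The lazy query keeps each robot's bookkeeping proportional to its own route rather than to the whole map, which is the efficiency argument made in the abstract.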
Rapid Structural Design Change Evaluation with AN Experiment Based FEM
NASA Astrophysics Data System (ADS)
Chu, C.-H.; Trethewey, M. W.
1998-04-01
The work in this paper proposes a dynamic structural design model that can be developed in a rapid fashion. The approach endeavours to produce a simplified FEM developed in conjunction with an experimental modal database. The FEM is formulated directly from the geometry and connectivity used in an experimental modal test using beam/frame elements. The model sacrifices fine detail for a rapid development time. The FEM is updated at the element level so that the dynamic response replicates the experimental results closely. The physical attributes of the model are retained, making it well suited to evaluating the effect of potential design changes. The capabilities are evaluated in a series of computational and laboratory tests. First, a study is performed with a simulated cantilever beam with a variable mass and stiffness distribution. The modal characteristics serve as the updating target, with random noise added to simulate experimental uncertainty. A uniformly distributed FEM is developed and updated. The updating yields excellent results: all natural frequencies are within 0.001%, with MAC values above 0.99. Next, the method is applied to predict the dynamic changes of a hardware portal frame structure for a radical design change. Natural frequency predictions from the original FEM differ by almost 18%, with reasonable MAC values. The predictions from the updated model agree very well with the actual hardware changes: the difference in the first five modal natural frequencies is around 5%, with the corresponding mode shapes producing MAC values above 0.98.
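The MAC values quoted above follow the standard Modal Assurance Criterion definition, sketched here for two mode-shape vectors (the test vector below is synthetic):

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(phi_a.conj() @ phi_e) ** 2
    return float(num / ((phi_a.conj() @ phi_a).real * (phi_e.conj() @ phi_e).real))

phi_fem = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.00])
phi_test = phi_fem + 0.02 * np.random.default_rng(4).standard_normal(6)
print(round(mac(phi_fem, phi_test), 4))   # close to 1 for well-correlated shapes
```

A MAC near 1 indicates the two shapes are essentially the same up to scaling, which is why values above 0.98 are read as a successful update.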
Sonora: A New Generation Model Atmosphere Grid for Brown Dwarfs and Young Extrasolar Giant Planets
NASA Astrophysics Data System (ADS)
Marley, Mark S.; Saumon, Didier; Fortney, Jonathan J.; Morley, Caroline; Lupu, Roxana E.; Freedman, Richard; Visscher, Channon
2017-06-01
Brown dwarf and giant planet atmospheric structure and composition have been studied both by forward models and, increasingly, by retrieval methods. While indisputably informative, retrieval methods are of greatest value when judged in the context of grid model predictions. Meanwhile, retrieval models can test the assumptions inherent in the forward modeling procedure. In order to provide a new, systematic survey of brown dwarf atmospheric structure, emergent spectra, and evolution, we have constructed a new grid of brown dwarf model atmospheres. We ultimately aim for our grid to span substantial ranges of atmospheric metallicity, C/O ratios, cloud properties, atmospheric mixing, and other parameters. Spectra predicted by our modeling grid can be compared to both observations and retrieval results to aid in the interpretation and planning of future telescopic observations. We thus present Sonora, a new generation of substellar atmosphere models, appropriate for application to studies of L-, T-, and Y-type brown dwarfs and young extrasolar giant planets. The models describe the expected temperature-pressure profile and emergent spectra of an atmosphere in radiative-convective equilibrium for ranges of effective temperatures and gravities encompassing 200 ≤ Teff ≤ 2400 K and 2.5 ≤ log g ≤ 5.5. In our poster we briefly describe our modeling methodology, enumerate various updates since our group's previous models, and present our initial tranche of models for cloudless, solar metallicity, solar carbon-to-oxygen ratio, chemical equilibrium atmospheres. These models will be available online and will be updated as opacities and cloud modeling methods continue to improve.
A Study on Rational Function Model Generation for TerraSAR-X Imagery
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-01-01
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of better than 10−3 pixels. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction. PMID:24021971
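The affine refinement of the second method can be sketched as a least-squares fit of an image-space affine correction at the GCPs; the coordinates below are invented, and a real workflow would reserve independent check points to verify the fit.

```python
import numpy as np

def fit_affine(rpc_rc, gcp_rc):
    """Fit row/col affine correction:  measured ~= [r, c, 1] @ coef.

    rpc_rc : (n, 2) image coordinates predicted by the RPCs at the GCPs
    gcp_rc : (n, 2) measured image coordinates of the same GCPs
    """
    A = np.column_stack([rpc_rc, np.ones(len(rpc_rc))])   # (n, 3) design matrix
    coef, *_ = np.linalg.lstsq(A, gcp_rc, rcond=None)     # (3, 2) affine parameters
    return coef

def apply_affine(coef, rc):
    return np.column_stack([rc, np.ones(len(rc))]) @ coef

# four corner GCPs (illustrative numbers): RPC predictions vs. measurements
rpc_rc = np.array([[10.0, 12.0], [10.2, 980.5], [970.9, 11.6], [971.3, 981.0]])
gcp_rc = rpc_rc + np.array([0.8, -0.6])                   # constant geolocation bias
coef = fit_affine(rpc_rc, gcp_rc)
print(np.abs(apply_affine(coef, rpc_rc) - gcp_rc).max())  # residual ~ 0
```

With four GCPs and six affine unknowns per image, the system is just over-determined, which matches the paper's use of corner points to absorb the absolute geolocation bias.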
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one- to six-day lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-input Model (NARXM), also a neural network-type structure, but with wide options for using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF type, using recent rainfall and observed discharge data; (vi) the Parametric Linear Perturbation Model (PLPM), also of LTF type, using recent rainfall and observed discharge data; (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naïve form of the NARXM, using only the observed discharge data and excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, only the simulated outflows of this model were selected for the subsequent exercise of producing updated discharge forecasts. All eight updating models were found to be capable of producing relatively good lead-1 (1-day-ahead) forecasts, with R2 values of almost 90% or above. However, for longer lead times, only three updating models, viz. NARXM, LTF, and NNU, were found to be suitable, with lead-6 R2 values of about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
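The simplest of the eight schemes, the AR error-updating model (i), is easy to sketch: fit an AR(1) coefficient to historical simulation residuals and propagate the last known error across the forecast leads. The data, the AR order, and the through-origin fit below are illustrative choices.

```python
import numpy as np

def ar1_update(sim_past, obs_past, sim_future):
    """Update simulated discharge forecasts with an AR(1) model of the errors."""
    e = obs_past - sim_past                       # historical residuals
    phi = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])    # least-squares AR(1) coefficient
    e_hat, out = e[-1], []
    for q_sim in sim_future:                      # lead-1, lead-2, ... forecasts
        e_hat *= phi                              # propagate the last known error
        out.append(q_sim + e_hat)
    return np.array(out)

rng = np.random.default_rng(5)
obs = 50 + np.cumsum(rng.standard_normal(60))     # synthetic daily flows
sim = obs + 3.0 + rng.standard_normal(60)         # biased simulation output
print(ar1_update(sim[:54], obs[:54], sim[54:]))   # updated lead-1..6 forecasts
```

Because the predicted error decays as phi**k with lead k, the correction fades at long leads, which is consistent with the finding that simple residual updaters lose skill beyond the first day or two.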
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and the reliability parameters of equipment, and it can support the decision-making process of operators and managers in Nuclear Power Plants.
NASA Astrophysics Data System (ADS)
Deng, Bin; Shen, ZhiBin; Duan, JingBo; Tang, GuoJin
2014-05-01
This paper studies the damage-viscoelastic behavior of composite solid propellants in solid rocket motors (SRM). Based on viscoelastic theories and the strain equivalence hypothesis of damage mechanics, a three-dimensional (3-D) nonlinear viscoelastic constitutive model incorporating damage is developed. The resulting viscoelastic constitutive equations are numerically discretized by an integration algorithm, and a stress-updating method is presented that solves the nonlinear equations with the Newton-Raphson method. A stress-updating material subroutine is implemented and embedded into the commercial code Abaqus, and it is validated through typical examples. Our results indicate that the finite element results are in good agreement with the analytical ones and have high accuracy, and that the suggested method and subroutine are efficient and can be further applied to the damage-coupled structural analysis of practical SRM grains.
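The core of such a stress-updating subroutine is a local Newton-Raphson iteration at each integration point. A minimal sketch, assuming a generic scalar constitutive residual g(sigma) = 0 rather than the paper's 3-D damage-viscoelastic equations:

```python
def newton_stress_update(residual, d_residual, sigma0, tol=1e-10, max_iter=25):
    """Solve residual(sigma) = 0 for the updated stress by Newton-Raphson.

    residual   : callable g(sigma), the discretized constitutive equation
    d_residual : callable g'(sigma), its consistent tangent
    sigma0     : trial (elastic predictor) stress
    """
    sigma = sigma0
    for _ in range(max_iter):
        g = residual(sigma)
        if abs(g) < tol:
            return sigma
        sigma -= g / d_residual(sigma)
    raise RuntimeError("stress update did not converge")

# Toy constitutive residual: sigma + k*sigma**3 - E*eps = 0 (assumed form).
E, k, eps = 10.0, 0.5, 0.2
g  = lambda s: s + k * s**3 - E * eps
dg = lambda s: 1.0 + 3.0 * k * s**2
sigma_new = newton_stress_update(g, dg, sigma0=E * eps)
```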
Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo
2017-07-01
This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
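The mechanism itself is compact: hold the last control input until the gap between the current state and the state at the last update violates a threshold. A toy sketch with an assumed linear system and an assumed state-feedback law standing in for the learned HDP controller:

```python
import numpy as np

def simulate_event_triggered(f, controller, x0, steps, thresh):
    """Update the control law only when the event-triggered condition is violated."""
    x, x_last = np.asarray(x0, float), np.asarray(x0, float)
    u = controller(x_last)
    updates = 0
    for _ in range(steps):
        if np.linalg.norm(x - x_last) > thresh(x):   # trigger condition violated
            x_last, u = x.copy(), controller(x)
            updates += 1
        x = f(x, u)                                   # otherwise hold the last input
    return x, updates

# Toy linear system with a stabilizing state feedback (assumed, not the HDP law).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[16.0, 5.6]])
f = lambda x, u: A @ x + (B @ u).ravel()
controller = lambda x: -(K @ x)
x_final, n_updates = simulate_event_triggered(
    f, controller, [1.0, 0.0], 200, lambda x: 0.05 * np.linalg.norm(x))
```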
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods are (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
NASA Astrophysics Data System (ADS)
Bertholet, Jenny; Toftegaard, Jakob; Hansen, Rune; Worm, Esben S.; Wan, Hanlin; Parikh, Parag J.; Weber, Britta; Høyer, Morten; Poulsen, Per R.
2018-03-01
The purpose of this study was to develop, validate and clinically demonstrate fully automatic tumour motion monitoring on a conventional linear accelerator by combined optical and sparse monoscopic imaging with kilovoltage x-rays (COSMIK). COSMIK combines auto-segmentation of implanted fiducial markers in cone-beam computed tomography (CBCT) projections and intra-treatment kV images with simultaneous streaming of an external motion signal. A pre-treatment CBCT is acquired with simultaneous recording of the motion of an external marker block on the abdomen. The 3-dimensional (3D) marker motion during the CBCT is estimated from the auto-segmented positions in the projections and used to optimize an external correlation model (ECM) of internal motion as a function of external motion. During treatment, the ECM estimates the internal motion from the external motion at 20 Hz. KV images are acquired every 3 s, auto-segmented, and used to update the ECM for baseline shifts between internal and external motion. The COSMIK method was validated using Calypso-recorded internal tumour motion with simultaneous camera-recorded external motion for 15 liver stereotactic body radiotherapy (SBRT) patients. The validation included phantom experiments and simulations thereof for 12 fractions and further simulations for 42 fractions. The simulations compared the accuracy of COSMIK with ECM-based monitoring without model updates and with model updates based on stereoscopic imaging, as well as continuous kilovoltage intrafraction monitoring (KIM) at 10 Hz without an external signal. Clinical real-time tumour motion monitoring with COSMIK was performed offline for 14 liver SBRT patients (41 fractions) and online for one patient (two fractions). The mean 3D root-mean-square error for the four monitoring methods was 1.61 mm (COSMIK), 2.31 mm (ECM without updates), 1.49 mm (ECM with stereoscopic updates) and 0.75 mm (KIM). COSMIK is the first combined kV/optical real-time motion monitoring method used clinically online on a conventional accelerator. COSMIK delivers a lower imaging dose than KIM and, in addition, is applicable when the kV imager cannot be deployed, such as during non-coplanar fields.
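The external correlation model in such schemes can be as simple as a per-axis linear map from the external signal to internal position, with each sparse kV detection nudging the intercept to absorb baseline shifts. A hedged toy, not the COSMIK implementation:

```python
import numpy as np

class LinearECM:
    """Internal position ~ gain * external + baseline, fitted per axis."""
    def __init__(self, external, internal):
        # Fit gain/baseline per axis from the pre-treatment CBCT period.
        self.gain = np.empty(3)
        self.baseline = np.empty(3)
        for ax in range(3):
            g, b = np.polyfit(external[:, ax], internal[:, ax], 1)
            self.gain[ax], self.baseline[ax] = g, b

    def predict(self, external_now):
        return self.gain * external_now + self.baseline

    def baseline_update(self, external_now, kv_internal, alpha=0.3):
        # Nudge the intercept toward the sparse kV observation.
        err = kv_internal - self.predict(external_now)
        self.baseline += alpha * err

# Synthetic training pair (assumed): external marker and internal target (mm).
t = np.linspace(0, 60, 600)
ext = np.column_stack([np.sin(t), 0.5 * np.sin(t), 2.0 * np.sin(t)])
intr = 1.8 * ext + np.array([0.0, 1.0, -2.0])
ecm = LinearECM(ext, intr)
ecm.baseline_update(ext[-1], intr[-1] + np.array([0.5, 0.0, 0.0]))  # kV image every ~3 s
```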
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We present a comprehensive and updated comparison with cosmological observations of two non-local modifications of gravity previously introduced by our group, the so-called RR and RT models. We implement the background evolution and the cosmological perturbations of the models in a modified Boltzmann code, using CLASS. We then test the non-local models against the Planck 2015 TT, TE, EE and Cosmic Microwave Background (CMB) lensing data, isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H_0 measurements and growth rate data, and we perform Bayesian parameter estimation. We then compare the RR, RT and ΛCDM models using the Savage-Dickey method. We find that the RT model and ΛCDM perform equally well, while the performance of the RR model with respect to ΛCDM depends on whether or not we include a prior on H_0 based on local measurements.
Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang
2011-01-01
California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts including dynamic relaxation are required to improve the stability of the system.
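The virtual coupling idea can be stated in a few lines: the haptic device and the simulated tool are connected by a spring-damper, so the fast haptic loop renders the coupling force while the slower FEM loop integrates its reaction. A sketch with assumed gains:

```python
import numpy as np

def coupling_force(device_pos, tool_pos, device_vel, k=300.0, b=2.0):
    """Spring-damper virtual coupling (gains are assumptions). The returned
    force is rendered to the user at haptic rates (~1 kHz); its negative is
    applied to the simulated tool by the slower FEM update (~50 Hz)."""
    device_pos, tool_pos = np.asarray(device_pos), np.asarray(tool_pos)
    return -k * (device_pos - tool_pos) - b * np.asarray(device_vel)

f = coupling_force([0.012, 0.0, 0.0], [0.010, 0.0, 0.0], [0.05, 0.0, 0.0])
```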
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data is fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighbouring countries' border regions, the data are difficult for any single research institution or government to acquire. VGI has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost. Therefore VGI is one reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. But the OSM data model differs greatly from traditional authoritative geographic information, so the OSM data need to be converted to the user's customized data model. Because the real world changes quickly, the converted data must also be kept up to date. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method is presented for selecting, from the OSM whole-world daily diff file, the objects in the research area that changed over a given period; and a change-only information file in a designed format is produced automatically. Based on the rules and algorithms mentioned above, we implemented the automatic (or semiautomatic) integration and updating of the borderland database in software. The developed system was intensively tested.
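As an illustration of the change-selection step, the sketch below filters an OsmChange (.osc) daily diff down to the nodes falling inside a research-area bounding box; the file name and the output record format are assumptions for the example:

```python
import xml.etree.ElementTree as ET

def changed_nodes_in_bbox(osc_path, lon_min, lat_min, lon_max, lat_max):
    """Select created/modified/deleted nodes inside a bounding box from an
    OsmChange (.osc) daily diff file, yielding change-only records."""
    tree = ET.parse(osc_path)
    for action in tree.getroot():                # <create>, <modify>, <delete>
        for node in action.findall("node"):
            # Deleted nodes may omit coordinates; NaN comparisons skip them.
            lon = float(node.get("lon", "nan"))
            lat = float(node.get("lat", "nan"))
            if lon_min <= lon <= lon_max and lat_min <= lat <= lat_max:
                yield {"action": action.tag, "id": node.get("id"),
                       "lon": lon, "lat": lat,
                       "tags": {t.get("k"): t.get("v")
                                for t in node.findall("tag")}}

# Usage (hypothetical file and bounding box over a border region):
# for rec in changed_nodes_in_bbox("day.osc", 102.0, 21.0, 108.0, 24.0):
#     print(rec)
```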
At the 12th Annual CMAS Conference initial results from the application of the coupled WRF-CMAQ modeling system to the 2011 Baltimore-Washington D.C. DISCOVER-AQ campaign were presented, with the focus on updates and new methods applied to the WRF modeling for fine-scale applicat...
A Long-Term Performance Enhancement Method for FOG-Based Measurement While Drilling
Zhang, Chunxi; Lin, Tie
2016-01-01
In the oil industry, measurement-while-drilling (MWD) systems are usually used to provide the real-time position and orientation of the bottom hole assembly (BHA) during drilling. However, present MWD systems based on magnetic surveying technology can barely ensure good performance because of magnetic interference phenomena. In this paper, a MWD surveying system based on a fiber optic gyroscope (FOG) was developed to replace the magnetic surveying system. To accommodate the size constraints of downhole drilling conditions, a new design method is adopted. To realize long-term, high-precision position and orientation surveying, an integrated surveying algorithm is proposed based on the inertial navigation system (INS) and drilling features. In addition, the FOG-based MWD error model is built and the drilling features are analyzed. The state-space system model and the observation update model of the Kalman filter are built. To validate the availability and utility of the algorithm, a semi-physical simulation is conducted under laboratory conditions. Comparison with traditional algorithms shows that errors are suppressed and that the measurement precision of the proposed algorithm is better than that of the traditional ones. In addition, the proposed method requires far less time than the zero velocity update (ZUPT) method. PMID:27483270
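The observation update at the heart of such an integrated algorithm is the standard Kalman correction. A generic sketch (the paper's full INS error-state model is much larger), shown here with a ZUPT-style zero-velocity observation:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: correct state x and covariance P
    with observation z, observation matrix H, and noise covariance R."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy ZUPT-style update: while drilling pauses, the observed velocity is ~0,
# which corrects the velocity component of an assumed [position, velocity] state.
x = np.array([10.0, 0.3])                   # position (m), velocity (m/s)
P = np.diag([1.0, 0.2])
H = np.array([[0.0, 1.0]])                  # we observe velocity only
x, P = kalman_update(x, P, z=np.array([0.0]), H=H, R=np.array([[1e-4]]))
```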
Avoiding drift related to linear analysis update with Lagrangian coordinate models
NASA Astrophysics Data System (ADS)
Wang, Yiguo; Counillon, Francois; Bertino, Laurent
2015-04-01
When applying data assimilation to Lagrangian coordinate models, it is profitable to correct the model grid (position, volume). In isopycnal-coordinate ocean models, such information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighbouring layers, resulting in a probability density function with a larger mean and smaller standard deviation that prevents the appearance of negative values. Second, the analysis increments of the grouped layers are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, ...) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate prediction by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth system model (NorESM) with an isopycnal ocean model (MICOM). Over a 25-year analysis period, the new method does not impair the predictive skill of the system, corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
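A toy version of the thickness update conveys the flavour of the approach; the proportional within-group allocation used below is a simplifying assumption for the sketch (so massless layers stay massless), not the paper's exact scheme:

```python
import numpy as np

def safe_thickness_update(h_bg, dh):
    """Apply analysis increments dh to layer thicknesses h_bg, conserving the
    total while keeping every layer non-negative.

    Layers whose update would drive the (grouped) thickness negative are
    merged with a neighbour until the group total stays non-negative; the
    group increment is then shared in proportion to background thickness."""
    h_bg, dh = np.array(h_bg, float), np.array(dh, float)
    groups = [[i] for i in range(len(h_bg))]
    while len(groups) > 1:
        bad = next((j for j, g in enumerate(groups)
                    if h_bg[g].sum() + dh[g].sum() < 0.0), None)
        if bad is None:
            break
        k = bad - 1 if bad > 0 else bad + 1       # merge with a neighbour
        groups[k] += groups[bad]
        del groups[bad]
    h_new = np.empty_like(h_bg)
    for g in groups:
        tot = h_bg[g].sum()
        w = h_bg[g] / tot if tot > 0 else np.full(len(g), 1.0 / len(g))
        h_new[g] = h_bg[g] + dh[g].sum() * w
    return h_new

h = safe_thickness_update([0.0, 1.0, 5.0], [-0.5, -0.8, 0.1])
assert h.min() >= -1e-12 and np.isclose(h.sum(), 6.0 - 1.2)  # conservative
```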
Protein structure modeling and refinement by global optimization in CASP12.
Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2018-03-01
For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
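The bounded-penalty point is easy to illustrate. One common Lorentzian-style restraint form (the protocol's exact functional form and parameters are not reproduced here) saturates for large violations, so a wrong predicted contact has limited influence:

```python
import numpy as np

# Restraint penalties as a function of the violation r = d - d0.
def quadratic_penalty(r):
    return r**2                              # unbounded: one bad restraint dominates

def lorentzian_penalty(r, gamma=1.0, cap=1.0):
    # Saturates at `cap`, so wrong information has bounded maximal penalty.
    return cap * r**2 / (r**2 + gamma**2)

r = np.linspace(-10, 10, 5)
print(quadratic_penalty(r))    # grows without bound
print(lorentzian_penalty(r))   # approaches 1 for large violations
```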
Projecting U.S. climate forcing and criteria pollutant emissions through 2050
Presentation highlighting a method for translating emission scenarios to model-ready emission inventories. The presentation highlights new features for spatially allocating emissions to counties and grid cells and identifies areas of potential improvement, such as updating tempor...
75 FR 1003 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
... the method by which an IBHC can elect to become an SIBHC. In addition, Rule 17i-2 indicates that the... would be required to update its Notice of Intention if it changes a mathematical model used to calculate...
DOT2: Macromolecular Docking With Improved Biophysical Models
Roberts, Victoria A.; Thompson, Elaine E.; Pique, Michael E.; Perez, Martin S.; Eyck, Lynn Ten
2015-01-01
Computational docking is a useful tool for predicting macromolecular complexes, which are often difficult to determine experimentally. Here we present the DOT2 software suite, an updated version of the DOT intermolecular docking program. DOT2 provides straightforward, automated construction of improved biophysical models based on molecular coordinates, offering checkpoints that guide the user to include critical features. DOT has been updated to run more quickly, allow flexibility in grid size and spacing, and generate a complete list of favorable candidate configurations. Output can be filtered by experimental data and rescored by the sum of electrostatic and atomic desolvation energies. We show that this rescoring method improves the ranking of correct complexes for a wide range of macromolecular interactions, and demonstrate that biologically relevant models are essential for biologically relevant results. The flexibility and versatility of DOT2 accommodate realistic models of complex biological systems, improving the likelihood of a successful docking outcome. PMID:23695987
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to the practical difficulty of specifying modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation focuses on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
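A single step of the weight update in such a particle-filter scheme looks as follows (a sketch with a Gaussian likelihood; lagged filtering would evaluate each particle against observations over a trailing window):

```python
import numpy as np

def pf_weight_update(weights, sim_obs, obs, sigma):
    """Reweight particles by the Gaussian likelihood of the latest
    discharge observation, then renormalize."""
    w = weights * np.exp(-0.5 * ((sim_obs - obs) / sigma) ** 2)
    return w / w.sum()

w = pf_weight_update(np.full(4, 0.25),
                     np.array([10.0, 12.0, 9.0, 11.0]),  # particle simulations
                     obs=10.5, sigma=1.0)
```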
Kramer, Kirsten E; Small, Gary W
2009-02-01
Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm(-1). By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.
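A hedged stand-in for the preprocessing filter: a band-pass IIR filter applied along the spectral axis with scipy (the order and pass-band here are arbitrary placeholders, not the tuned frequency response described above):

```python
import numpy as np
from scipy.signal import butter, lfilter

def preprocess_spectra(spectra, low, high, fs, order=4):
    """Apply a band-pass IIR (Butterworth) filter along the spectral axis."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return lfilter(b, a, spectra, axis=-1)

rng = np.random.default_rng(1)
spectra = rng.normal(size=(10, 512))        # 10 interferogram-like traces
filtered = preprocess_spectra(spectra, low=0.05, high=0.15, fs=1.0)
```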
Identification of cracks in thick beams with a cracked beam element model
NASA Astrophysics Data System (ADS)
Hou, Chuanchuan; Lu, Yong
2016-12-01
The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models involving formulations based on the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics, and it takes into consideration shear deformation and coupling between translational and longitudinal vibrations, and thus is particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
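The updating procedure amounts to minimizing a frequency-residual objective over crack locations and severities with a Genetic Algorithm. A sketch of such an objective (the weighting, parameterization and stand-in model are assumptions):

```python
import numpy as np

def updating_objective(crack_params, measured_freqs, model_freqs):
    """Fitness for GA-based crack identification: relative residual between
    measured natural frequencies and those predicted by the cracked-beam FE
    model for a candidate set of crack locations/severities."""
    f_pred = model_freqs(crack_params)
    return float(np.sum(((f_pred - measured_freqs) / measured_freqs) ** 2))

# Stand-in model: frequencies drop with total crack severity (illustration only).
base = np.array([12.1, 75.8, 212.3])
model_freqs = lambda p: base * (1.0 - 0.1 * np.sum(p[1::2]))  # p = [loc1, sev1, ...]
print(updating_objective(np.array([0.3, 0.2]), base * 0.98, model_freqs))
```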
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
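The essence of the algorithm, bounded-cost updates through a finite random feature map, can be sketched compactly. The following is an illustrative reimplementation of the idea, not the reference code:

```python
import numpy as np

class IncrementalRFF:
    """Ridge regression on random Fourier features approximating an RBF
    kernel; each update costs O(D^2) regardless of how many samples have
    been seen."""
    def __init__(self, dim, n_features=200, lengthscale=1.0, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / lengthscale, (n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.A = reg * np.eye(n_features)      # accumulated phi phi^T (+ ridge)
        self.c = np.zeros(n_features)          # accumulated phi * y

    def _phi(self, x):
        return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

    def update(self, x, y):                    # cost independent of sample count
        p = self._phi(x)
        self.A += np.outer(p, p)
        self.c += p * y

    def predict(self, x):
        return float(self._phi(x) @ np.linalg.solve(self.A, self.c))

model = IncrementalRFF(dim=1)
for x in np.linspace(-3.0, 3.0, 300):
    model.update(np.array([x]), np.sin(x))
print(model.predict(np.array([0.5])))          # close to sin(0.5) ~ 0.479
```

Here predict() re-solves a D x D system for clarity; maintaining a rank-one-updated Cholesky factor instead would keep the prediction cost bounded as well.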
Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo
This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of the ℓ1-norm constrained optimization problem and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
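The LB iteration for the ℓ1 problem is only a few lines, which is the simplicity the paper exploits. A generic compressive-sensing toy (step sizes for a real FWI operator would need care):

```python
import numpy as np

def linearized_bregman(A, b, lam=1.0, delta=0.1, n_iter=3000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b.
    Stability needs roughly delta * ||A||_2^2 < 2 (here ||A||_2 ~ 2.6)."""
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                                     # residual step
        x = delta * np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)  # soft shrink
    return x

# Toy problem: recover a sparse vector from few random measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
x_rec = linearized_bregman(A, A @ x_true)     # approximately sparse recovery
```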
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100. This newest version includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide (Final Report) describes the development of ICLUS v2 and the updates that were made to the original data sets and to the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.
2004-01-01
This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.
Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun
2014-01-01
Damage to a 5-story framed structure was identified from two types of measured data, frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating, in order to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). The comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with the observed damage. PMID:24574888
Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models
NASA Astrophysics Data System (ADS)
Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.
2015-12-01
There is a large demand for an accurate yet rapid radiative transfer scheme for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and has been implemented in several global and regional climate models cooperatively developed in the Japanese research community, for example MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the update of the gas absorption process, the absorption line database is replaced by the latest version from the Harvard-Smithsonian Center, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points for the wavenumber integration using the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new non-linear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the non-linear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; this improvement decreases the bias. For the update of the ice-particle scattering process, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single scattering properties are calculated and tabulated in advance. The size parameter of this table ranges from 0.1 to 1000 in mstrnX; we expand the maximum to 50000 in order to accommodate large particles such as fog and rain drops. These updates will be introduced into MIROC and adopted for the CMIP6 experiment.
Li, Meng; Li, Gang; Gonenc, Berk; Duan, Xingguang; Iordachita, Iulian
2017-06-01
Accurate needle placement into soft tissue is essential to percutaneous prostate cancer diagnosis and treatment procedures. This paper discusses the steering of a 20 gauge (G) needle integrated with three sets of fiber Bragg grating (FBG) sensors. A fourth-order polynomial shape reconstruction method is introduced and compared with previous approaches. To control the needle, a bicycle-model-based navigation method is developed to provide visual guidance lines for clinicians. A real-time model updating method is proposed for needle steering inside inhomogeneous tissue. A series of experiments was performed to evaluate the proposed needle shape reconstruction, visual guidance and real-time model updating methods. Targeting experiments were performed in soft plastic phantoms and in vitro tissues with insertion depths ranging between 90 and 120 mm. Average targeting errors calculated based upon the acquired camera images were 0.40 ± 0.35 mm in homogeneous plastic phantoms, 0.61 ± 0.45 mm in multilayer plastic phantoms and 0.69 ± 0.25 mm in ex vivo tissue. The results endorse the feasibility and accuracy of the needle shape reconstruction and visual guidance methods developed in this work. The approach implemented for the multilayer phantom study could facilitate accurate needle placement in real inhomogeneous tissues. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Keil, M.; Esch, T.; Feigenspan, S.; Marconcini, M.; Metz, A.; Ottinger, M.; Zeidler, J.
2015-04-01
For the 2012 update of CORINE Land Cover, a new approach was developed in Germany in order to profit from the higher accuracy of the national topographic database. By agreement between the Federal Environment Agency (UBA) and the Federal Agency for Cartography and Geodesy (BKG), CLC2012 has been derived from an updated digital landscape model, DLM-DE, which is based on the Official Topographic-Cartographic Information System ATKIS of the land survey authorities. The DLM-DE 2009 created by the BKG served as the basis for the 2012 update in both the national and EU contexts, under the responsibility of the BKG. In addition to the updated CLC2012, a second product, the layer "CLC_Change" (2006-2012), was also requested by the European Environment Agency. The objective of DLR-DFD's part of the project was to contribute the primary change areas from 2006 to 2009, arising in the phase of the method change, by using the refined 2009 geometry of DLM-DE 2009 for a retrospective view back to 2006. A semiautomatic approach was developed for this task, in which AWiFS time-series data from 2005/2006 played an important role in the separation of grassland from arable land. Other valuable datasets for the project were already-available GMES land monitoring products of 2006, such as the 2006 soil sealing layer. The paper describes the developed method and discusses exemplary results of the CORINE backdating part of the project.
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman Filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Evans, M V; Chiu, W A; Okino, M S; Caldwell, J C
2009-05-01
Trichloroethylene (TCE) is a lipophilic solvent rapidly absorbed and metabolized via oxidation and conjugation to a variety of metabolites that cause toxicity to several internal targets. Increases in liver weight (hepatomegaly) have been reported to occur quickly in rodents after TCE exposure, with liver tumor induction reported in mice after long-term exposure. An integrated dataset for gavage and inhalation TCE exposure and oral data for exposure to two of its oxidative metabolites (TCA and DCA) was used, in combination with an updated and more accurate physiologically-based pharmacokinetic (PBPK) model, to examine whether the presence of TCA in the liver is responsible for TCE-induced hepatomegaly in mice. The updated PBPK model was used to help discern the quantitative contribution of metabolites to this effect. The update of the model was based on a detailed evaluation of predictions from previously published models and additional preliminary analyses based on gas uptake inhalation data in mice. The parameters of the updated model were calibrated using Bayesian methods with an expanded pharmacokinetic database consisting of oral, inhalation, and iv studies of TCE administration as well as studies of TCE metabolites in mice. The dose-response relationships for hepatomegaly derived from the multi-study database showed that the proportionality of dose to response for TCE- and DCA-induced hepatomegaly is not observed for administered doses of TCA in the studied range. The updated PBPK model was used to make a quantitative comparison of the internal dose of metabolized and administered TCA. While the internal dose of TCA predicted by modeling of TCE exposure (i.e., mg TCA/kg-d) showed a linear relationship with hepatomegaly, the slope of the relationship was much greater than that for directly administered TCA. Thus, the degree of hepatomegaly induced per unit of TCA produced through TCE oxidation is greater than that expected per unit of TCA administered directly, which is inconsistent with the hypothesis that TCA alone accounts for TCE-induced hepatomegaly. In addition, TCE-induced hepatomegaly showed a much more consistent relationship with PBPK model predictions of total oxidative metabolism than with predictions of TCE area-under-the-curve in blood, consistent with toxicity being induced by oxidative metabolites rather than the parent compound. Therefore, these results strongly suggest that oxidative metabolites in addition to TCA are necessary contributors to TCE-induced liver weight changes in mice.
NASA Astrophysics Data System (ADS)
He, Wei; Williard, Nicholas; Osterman, Michael; Pecht, Michael
A new method for state of health (SOH) and remaining useful life (RUL) estimation for lithium-ion batteries using Dempster-Shafer theory (DST) and the Bayesian Monte Carlo (BMC) method is proposed. In this work, an empirical model based on the physical degradation behavior of lithium-ion batteries is developed. Model parameters are initialized by combining sets of training data based on DST. BMC is then used to update the model parameters and predict the RUL based on data available through battery capacity monitoring. As more data become available, the accuracy of the model in predicting RUL improves. Two case studies demonstrating this approach are presented.
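A hedged sketch of the flavour of this approach: a double-exponential capacity-fade model (an assumed form, not necessarily the paper's parameterization), particle reweighting by the likelihood of each new capacity measurement, and an RUL read-off as the first crossing of a failure threshold:

```python
import numpy as np

def capacity(k, p):
    """Double-exponential capacity-fade model (assumed form): a, b, c, d = p."""
    a, b, c, d = p
    return a * np.exp(b * k) + c * np.exp(d * k)

def bmc_update(particles, k_obs, c_obs, sigma=0.01, seed=0):
    """Reweight parameter particles by the likelihood of a newly monitored
    capacity value, then resample (one Monte Carlo update step)."""
    pred = np.array([capacity(k_obs, p) for p in particles])
    w = np.exp(-0.5 * ((pred - c_obs) / sigma) ** 2)
    w /= w.sum()
    idx = np.random.default_rng(seed).choice(len(particles), len(particles), p=w)
    return particles[idx]

def rul_percentiles(particles, k_now, threshold=0.8, horizon=4000):
    """Cycles until the capacity fraction first crosses the failure threshold."""
    ks = np.arange(k_now, k_now + horizon)
    ruls = []
    for p in particles:
        below = capacity(ks, p) < threshold
        ruls.append(int(np.argmax(below)) if below.any() else horizon)
    return np.percentile(ruls, [5, 50, 95])

rng = np.random.default_rng(1)
particles = np.column_stack([rng.normal(1.0, 0.02, 500),
                             rng.normal(-1e-4, 2e-5, 500),
                             rng.normal(-0.01, 0.002, 500),
                             rng.normal(5e-4, 1e-4, 500)])  # hypothetical DST-fused init
particles = bmc_update(particles, k_obs=300, c_obs=0.95)
print(rul_percentiles(particles, k_now=300))
```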
Updates to Selected Analytical Methods for Environmental Remediation and Recovery (SAM)
View information on the latest updates to methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), including the newest recommended methods and publications.
Methods for Improving Fine-Scale Applications of the WRF-CMAQ Modeling System
Presentation on the work in AMAD to improve fine-scale (e.g. 4km and 1km) WRF-CMAQ simulations. Includes iterative analysis, updated sea surface temperature and snow cover fields, and inclusion of impervious surface information (urban parameterization).
Status and updates to the rangeland health model
USDA-ARS?s Scientific Manuscript database
Development of the widely applied rangeland health protocol, “Interpreting Indicators of Rangeland Health” (IIRH) was stimulated by the publication of the National Research Council’s 1994 publication, Rangeland Health: New Methods to Classify, Inventory, and Monitor Rangelands. In a parallel effort,...
Wright, Stephen T; Hoy, Jennifer; Mulhall, Brian; O’Connor, Catherine C; Petoumenos, Kathy; Read, Timothy; Smith, Don; Woolley, Ian; Boyd, Mark A
2014-01-01
Background: Recent studies suggest that higher cumulative HIV viraemia exposure, measured as viraemia copy-years (VCY), is associated with increased all-cause mortality. The objectives of this study are (a) to report the association between VCY and all-cause mortality, and (b) to assess associations between common patient characteristics and VCY. Methods: Analyses were based on patients recruited to the Australian HIV Observational Database (AHOD) who had received ≥ 24 weeks of antiretroviral therapy (ART). We established VCY after 1, 3, 5 and 10 years of ART by calculating the area under the plasma viral load time series. We used survival methods to determine the association between high VCY and all-cause mortality. We used multivariable mixed-effect models to determine predictors of VCY. We compared a baseline-information model with a time-updated model to evaluate discrimination of patients with high VCY. Results: Of the 3021 AHOD participants who initiated ART, 2073 (69%), 1667 (55%), 1267 (42%) and 638 (21%) were eligible for analysis at 1, 3, 5 and 10 years of ART, respectively. The multivariable-adjusted hazard ratio (HR) for the association between all-cause mortality and high VCY was statistically significant, HR 1.52 (1.09, 2.13), p-value = 0.01. For predicting high VCY after one year of ART, the area under the sensitivity/specificity curve (AUC) for the time-updated model compared to the baseline-information-only model was 0.92 vs. 0.84; at 10 years of ART, the AUC was 0.87 vs. 0.61, respectively. Conclusion: A high cumulative measure of viral load after initiating ART is associated with an increased risk of all-cause mortality. Identification of patients with high VCY is improved by incorporating time-updated information. PMID:24463783
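Computing VCY itself is a simple area-under-the-curve calculation. A sketch with hypothetical follow-up data:

```python
import numpy as np

def viraemia_copy_years(days, viral_load):
    """Cumulative viraemia exposure (copy-years/mL): trapezoidal area under
    the viral-load time series, with time expressed in years."""
    t = np.asarray(days, float) / 365.25
    v = np.asarray(viral_load, float)
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))

# Hypothetical follow-up: viral load (copies/mL) over two years of ART.
print(viraemia_copy_years([0, 90, 180, 365, 730], [50000, 400, 50, 50, 50]))
```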
Update on the Health Services Research Doctoral Core Competencies.
Burgess, James F; Menachemi, Nir; Maciejewski, Matthew L
2018-03-13
To present revised core competencies for doctoral programs in health services research (HSR), modalities to deliver these competencies, and suggested methods for assessing mastery of these competencies. Core competencies were originally developed in 2005, updated (but unpublished) in 2008, modestly updated for a 2016 HSR workforce conference, and revised based on feedback from attendees. Additional feedback was obtained from doctoral program directors, employer/workforce experts, and attendees of a presentation on these competencies at AcademyHealth's June 2017 Annual Research Meeting. The current version (V2.1) competencies include the ethical conduct of research, conceptual models, development of research questions, study designs, data measurement and collection methods, statistical methods for analyzing data, professional collaboration, and knowledge dissemination. These competencies represent a core that defines what HSR researchers should master in order to address the complexities of the microsystem-to-macrosystem research that HSR entails. There are opportunities to conduct formal evaluation of newer delivery modalities (e.g., flipped classrooms) and to integrate the new Learning Health System Researcher Core Competencies, developed by AHRQ, into the HSR core competencies. Core competencies in HSR are a continually evolving work in progress because new research questions arise, new methods are developed, and the trans-disciplinary nature of the field leads to new multidisciplinary and team-building needs. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.
2018-02-01
This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.
Updated users' guide for TAWFIVE with multigrid
NASA Technical Reports Server (NTRS)
Melson, N. Duane; Streett, Craig L.
1989-01-01
A program for the Transonic Analysis of a Wing and Fuselage with Interacted Viscous Effects (TAWFIVE) was improved by the incorporation of multigrid and a method to specify lift coefficient rather than angle-of-attack. A finite volume full potential multigrid method is used to model the outer inviscid flow field. First order viscous effects are modeled by a 3-D integral boundary layer method. Both turbulent and laminar boundary layers are treated. Wake thickness effects are modeled using a 2-D strip method. A brief discussion of the engineering aspects of the program is given. The input, output, and use of the program are covered in detail. Sample results are given showing the effects of boundary layer corrections and the capability of the lift specification method.
Heuristic reusable dynamic programming: efficient updates of local sequence alignment.
Hong, Changjin; Tewfik, Ahmed H
2009-01-01
Recomputation of previously evaluated similarity results between biological sequences becomes inevitable when researchers discover errors in their sequenced data or when they have to compare nearly similar sequences, e.g., in a family of proteins. We present an efficient scheme for updating local sequence alignments with an affine gap model. In principle, using the previous matching result between two amino acid sequences, we perform a forward-backward alignment to generate heuristic search bands which are bounded by a set of suboptimal paths. Given a correctly updated sequence, we initially predict a new score of the alignment path for each contour to select the best candidates among them. Then, we run the Smith-Waterman algorithm in this confined space. Furthermore, our heuristic alignment for an updated sequence can be further accelerated by using reusable dynamic programming (rDP), our prior work. In this study, we successfully validate the "relative node tolerance bound" (RNTB) in the pruned search space. Furthermore, we improve the computational performance by quantifying the successful RNTB tolerance probability and switching to rDP on perturbation-resilient columns only. In our search space derived by a threshold value of 90 percent of the optimal alignment score, we find that 98.3 percent of contours contain correctly updated paths. We also find that our method consumes only 25.36 percent of the runtime cost of the sparse dynamic programming (sDP) method, and only 2.55 percent of that of normal dynamic programming with the Smith-Waterman algorithm.
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY² = 1.0 and σY² = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strong transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages related to both Monte-Carlo (MC) and non-MC approaches: like MC methods, it computes the posterior probability distribution of the target quantities directly, and like non-MC methods it has computational times on the order of seconds.
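The "empirical Bayesian update" step described above amounts to conditioning a Gaussian prior field on linearized head observations. A generic sketch of such a linear-Gaussian update (the operator J stands in for the paper's perturbative linearization; all names are illustrative, not the authors' code):

```python
import numpy as np

def bayes_linear_update(m, C, J, d, R):
    """Condition a Gaussian prior field (mean m, covariance C) on linear(ized)
    observations d = J @ y + noise, with noise covariance R."""
    S = J @ C @ J.T + R             # innovation covariance
    K = C @ J.T @ np.linalg.inv(S)  # gain matrix
    m_post = m + K @ (d - J @ m)    # updated mean of the lnT field
    C_post = C - K @ J @ C          # updated (reduced) covariance
    return m_post, C_post
```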
A novel dynamic update framework for epileptic seizure prediction.
Han, Min; Ge, Sunan; Wang, Minghui; Hong, Xiaojun; Han, Jie
2014-01-01
Epileptic seizure prediction is a difficult problem in clinical applications, and it has the potential to significantly improve the daily lives of patients whose seizures cannot be controlled by either drugs or surgery. However, most current studies of epileptic seizure prediction focus only on high sensitivity and a low false-positive rate and lack the flexibility for a variety of epileptic seizures and patients' physical conditions. Therefore, a novel dynamic update framework for epileptic seizure prediction is proposed in this paper. In this framework, two basic sample pools are constructed and updated dynamically. Furthermore, the prediction model can be updated to be the most appropriate one for the prediction of seizures' arrival. Mahalanobis distance is introduced in this part to solve the problem of side information, measuring the distance between two data sets. In addition, a multichannel feature extraction method based on the Hilbert-Huang transform and extreme learning machine is utilized to extract the features of a patient's preseizure state against the normal state. Finally, a dynamic update epileptic seizure prediction system is built. Simulations on the Freiburg database show that the proposed system performs better than the one without updating. The research of this paper is significantly helpful for clinical applications, especially for the exploitation of online portable devices.
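For readers unfamiliar with the distance used above, here is a minimal sketch of a Mahalanobis distance between two data sets using a pooled covariance (a generic formulation; the paper's exact variant for handling side information may differ):

```python
import numpy as np

def mahalanobis_between_sets(X, Y):
    """Mahalanobis distance between the means of two data sets (rows = samples),
    scaled by the pooled sample covariance."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    nx, ny = len(X), len(Y)
    S = ((nx - 1) * np.cov(X, rowvar=False) +
         (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)  # pooled covariance
    diff = mx - my
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

# Tiny demo on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
print(mahalanobis_between_sets(X, X + 0.5))
```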
NASA Astrophysics Data System (ADS)
Pignalberi, A.; Pezzopane, M.; Rizzi, R.
2018-03-01
An empirical method to model the lower part of the ionospheric topside region, from the F2-layer peak height to about 500-600 km of altitude, over the European region is proposed. The method is based on electron density values recorded from December 2013 to June 2016 by Swarm satellites and on foF2 and hmF2 values provided by IRI UP (International Reference Ionosphere UPdate), a method developed to update the IRI model by assimilating foF2 and M(3000)F2 data routinely recorded by a network of European ionosonde stations. Topside effective scale heights are calculated by fitting definite analytical functions (α-Chapman, β-Chapman, Epstein, and exponential) through the values recorded by Swarm and the ones output by IRI UP, under the assumption that the effective scale height is constant in the altitude range considered. Calculated effective scale heights are then modeled as a function of foF2 and hmF2, in order to be operationally applicable to both ionosonde measurements and ionospheric models like IRI. The method produces two-dimensional grids of the median effective scale height binned as a function of foF2 and hmF2 for each of the considered topside profiles. A statistical comparison with radio occultation profiles collected by the Constellation Observing System for Meteorology, Ionosphere, and Climate/FORMOsa SATellite-3 is carried out to assess the validity of the proposed method and to investigate which of the considered topside profiles performs best. The α-Chapman topside function shows the best performance among the considered profiles and also outperforms the NeQuick topside option of IRI.
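For concreteness, the α-Chapman profile with a constant effective scale height H can be written down and inverted for the H that forces the profile through a single in-situ sample. A sketch under that constant-H assumption (the bracketing range for H and the demo numbers are illustrative guesses, not values from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def alpha_chapman(h, NmF2, hmF2, H):
    """alpha-Chapman topside profile with a constant effective scale height H (km)."""
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

def effective_scale_height(Ne_sat, h_sat, NmF2, hmF2):
    """Solve for the H that makes the profile pass through one satellite sample.
    brentq needs a sign change over the bracket, so the range must be plausible."""
    f = lambda H: alpha_chapman(h_sat, NmF2, hmF2, H) - Ne_sat
    return brentq(f, 10.0, 500.0)

# Demo with hypothetical densities in electrons/m^3 and heights in km
print(effective_scale_height(Ne_sat=2e11, h_sat=500.0, NmF2=8e11, hmF2=300.0))
```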
Graphical user interface for intraoperative neuroimage updating
NASA Astrophysics Data System (ADS)
Rick, Kyle R.; Hartov, Alex; Roberts, David W.; Lunn, Karen E.; Sun, Hai; Paulsen, Keith D.
2003-05-01
Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from the OR to the image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient-specific, preoperative MR images. Registration matrices are also applied to the patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and the result is re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits and the interface was relatively easy to use after a brief training period. The techniques described had been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room to provide near real-time feedback to the surgeon.
Imitate or innovate: Competition of strategy updating attitudes in spatial social dilemma games
NASA Astrophysics Data System (ADS)
Danku, Zsuzsa; Wang, Zhen; Szolnoki, Attila
2018-01-01
Evolution is based on the assumption that competing players update their strategies to increase their individual payoffs. However, while the applied updating method can differ, most previous works proposed uniform models in which players use an identical way to revise their strategies. In this work we explore how an imitation-based (learning) attitude and an innovation-based (myopic best-response) attitude compete for space in a complex model where both attitudes are available. In the absence of an additional cost, the best-response trait practically dominates the whole snow-drift game parameter space, which is in agreement with the average payoff difference of the basic models. When an additional cost is involved, the imitation attitude can gradually invade the whole parameter space, but this transition happens in a highly nontrivial way. However, the role of the competing attitudes is reversed in the stag-hunt parameter space, where imitation is more successful in general. Interestingly, a four-state solution can be observed for the latter game, which is a consequence of an emerging cyclic dominance between the possible states. These phenomena can be understood by analyzing the microscopic invasion processes, which reveals the unequal propagation velocities of strategies and attitudes.
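The two competing attitudes correspond to two standard microscopic update rules in this literature. A schematic sketch of each (the Fermi form of the imitation probability and the noise parameter K are conventional choices in spatial game models, not details taken from this particular paper):

```python
import math
import random

def imitate(payoff_self, payoff_neighbor, K=0.1):
    """Imitation (learning) attitude: adopt a neighbor's strategy with the
    Fermi probability; better-performing neighbors are copied more often."""
    w = 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / K))
    return random.random() < w

def best_response(payoff_if_C, payoff_if_D):
    """Innovation (myopic best-response) attitude: ignore neighbors' strategies
    and pick whichever own strategy would pay more against the current neighborhood."""
    return "C" if payoff_if_C >= payoff_if_D else "D"
```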
Sonora: A New Generation Model Atmosphere Grid for Brown Dwarfs and Young Extrasolar Giant Planets
NASA Technical Reports Server (NTRS)
Marley, Mark S.; Saumon, Didier; Fortney, Jonathan J.; Morley, Caroline; Lupu, Roxana Elena; Freedman, Richard; Visscher, Channon
2017-01-01
Brown dwarf and giant planet atmospheric structure and composition have been studied both by forward models and, increasingly, by retrieval methods. While indisputably informative, retrieval methods are of greatest value when judged in the context of grid model predictions. Meanwhile, retrieval models can test the assumptions inherent in the forward modeling procedure. In order to provide a new, systematic survey of brown dwarf atmospheric structure, emergent spectra, and evolution, we have constructed a new grid of brown dwarf model atmospheres. We ultimately aim for our grid to span substantial ranges of atmospheric metallicity, C/O ratios, cloud properties, atmospheric mixing, and other parameters. Spectra predicted by our modeling grid can be compared to both observations and retrieval results to aid in the interpretation and planning of future telescopic observations. We thus present Sonora, a new generation of substellar atmosphere models, appropriate for application to studies of L-, T-, and Y-type brown dwarfs and young extrasolar giant planets. The models describe the expected temperature-pressure profile and emergent spectra of an atmosphere in radiative-convective equilibrium for ranges of effective temperatures and gravities encompassing 200 K ≤ Teff ≤ 2400 K and 2.5 ≤ log g ≤ 5.5. In our poster we briefly describe our modeling methodology, enumerate various updates since our group's previous models, and present our initial tranche of models for cloudless, solar-metallicity, solar carbon-to-oxygen ratio, chemical-equilibrium atmospheres. These models will be available online and will be updated as opacities and cloud modeling methods continue to improve.
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Because the existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory in the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, i.e., the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui
2009-01-01
The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step in the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6] it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined.
Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7]. In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
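As a one-line reminder, the variance claim attributed to Ref. [6] above is the Rao-Blackwell theorem, which follows from the law of total variance; in LaTeX:

$$\operatorname{Var}(\hat{\theta}) = E\!\left[\operatorname{Var}(\hat{\theta} \mid T)\right] + \operatorname{Var}\!\left(E[\hat{\theta} \mid T]\right) \;\ge\; \operatorname{Var}\!\left(E[\hat{\theta} \mid T]\right),$$

so the conditioned estimator $\theta^{*} = E[\hat{\theta} \mid T]$, with $T$ the supporting statistic, never has larger variance than $\hat{\theta}$.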
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, and two others (trial sequential analysis and sequential meta-analysis) control for type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
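To see why naive updating inflates the type I error, consider standard inverse-variance fixed-effect pooling recomputed after each new study: the resulting z statistic is then re-tested at every update as if it were a single fresh test. A minimal sketch of this naive cumulative update (illustrative only; the four methods above adjust the weights or the testing boundary precisely to avoid this repeated-testing problem):

```python
import numpy as np

def cumulative_meta(effects, variances):
    """Fixed-effect inverse-variance pooled estimate after each update."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)   # inverse-variance weights
    cum_w = np.cumsum(w)
    pooled = np.cumsum(w * effects) / cum_w  # running pooled effect
    se = np.sqrt(1.0 / cum_w)                # running standard error
    # Testing pooled/se against 1.96 at every update inflates type I error.
    return pooled, se, pooled / se

pooled, se, z = cumulative_meta([0.3, 0.1, 0.25], [0.04, 0.05, 0.03])
print(z)
```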
Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian
2013-01-01
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512
NASA Astrophysics Data System (ADS)
Secan, James A.
1991-05-01
Modern military communication, navigation, and surveillance systems depend on reliable, noise-free transionospheric radio-frequency channels. They can be severely impacted by small-scale electron-density irregularities in the ionosphere, which cause both phase and amplitude scintillation. The basic tools used in planning and mitigation schemes are climatological in nature and thus may greatly over- or under-estimate the effects of scintillation in a given scenario. This report summarizes the results of the first year of a three-year investigation into methods for updating ionospheric scintillation models using observations of ionospheric plasma-density irregularities measured by the DMSP Scintillation Meter (SM) sensor. Results are reported from the analysis of data from a campaign conducted in January 1990 near Tromso, Norway, in which near-coincident in-situ plasma-density and transionospheric scintillation measurements were made. Estimates of the level of intensity and phase scintillation on a transionospheric UHF radio link in the early-evening auroral zone were calculated from DMSP SM data and compared to the levels actually observed.
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
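A time-lagged ensemble needs no extra model runs: forecasts valid at the same time but initialized at successive earlier cycles are treated as ensemble members, and their spread serves as the uncertainty proxy. A minimal sketch with hypothetical data (array shapes and values are illustrative):

```python
import numpy as np

# Hypothetical example: five u-wind forecasts on a small grid, all valid at the
# same time but initialized 0..4 hours earlier (shape: lags x lat x lon).
rng = np.random.default_rng(1)
forecasts = 20.0 + rng.normal(scale=2.0, size=(5, 10, 10))

# Spread across the time-lagged members as a proxy for wind forecast uncertainty.
spread = np.std(forecasts, axis=0, ddof=1)
print(spread.mean())
```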
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
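The triangular membership functions and the sub-level (alpha-cut) technique mentioned above have simple closed forms. A minimal sketch (generic definitions; the study's specific parameter bounds are not reproduced):

```python
def triangular_membership(x, left, peak, right):
    """Membership grade of x in a triangular fuzzy number (left, peak, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def alpha_cut(left, peak, right, alpha):
    """Sub-level (alpha-cut) interval: all values with membership >= alpha.
    Propagating these intervals through the model at several alpha levels
    yields membership functions for the updated parameters."""
    return (left + alpha * (peak - left), right - alpha * (right - peak))

print(triangular_membership(0.5, 0.0, 1.0, 2.0), alpha_cut(0.0, 1.0, 2.0, 0.5))
```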
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to get the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiment, we test the algorithm on 9 sequences and compare the results with five state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
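The collaborative objective described above has a closed-form ridge step for the subspace coefficients and a soft-thresholding step for the sparse residual, so the iterative method can be realized as an alternation of the two. A minimal sketch (regularization weights and iteration count are illustrative; this is not the paper's exact iteration):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pca_sparse_represent(x, U, lam1=0.1, lam2=0.05, iters=50):
    """Minimize ||x - U c - e||^2 + lam1*||e||_1 + lam2*||c||^2 by alternation:
    a ridge solve for the PCA coefficients c and a shrinkage step for the
    sparse residual e (which absorbs occlusions/outliers)."""
    e = np.zeros_like(x)
    A = np.linalg.inv(U.T @ U + lam2 * np.eye(U.shape[1])) @ U.T
    for _ in range(iters):
        c = A @ (x - e)                    # L2-regularized subspace coefficients
        e = soft(x - U @ c, lam1 / 2.0)    # L1-regularized residual term
    return c, e
```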
Adapting to change: The role of the right hemisphere in mental model building and updating.
Filipowicz, Alex; Anderson, Britt; Danckert, James
2016-09-01
We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Multi-level damage identification with response reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chao-Dong; Xu, You-Lin
2017-10-01
Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global-identifiability features of the inverse problem can hinder the realization of model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first at the macro element level to locate the potentially damaged region, and the second over the suspicious substructures to further locate and quantify the damage severity. In each level's identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational effort but also increasing the damage identifiability. Besides, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the identification accuracy of damage localization and quantification.
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Perera, Ricardo; De Roeck, Guido
2008-06-01
This paper develops a sensitivity-based updating method to identify the damage in a tested reinforced concrete (RC) frame, modeled with a two-dimensional planar finite element (FE) model, by minimizing the discrepancies of modal frequencies and mode shapes. In order to reduce the number of unknown variables, a bidimensional damage (element) function is proposed, resulting in a considerable improvement of the optimization performance. For damage identification, a reference FE model of the undamaged frame divided into a few damage functions is first obtained, and then a rough identification is carried out to detect possible damage locations, which are subsequently refined with new damage functions to accurately identify the damage. From a design point of view, it would be useful to evaluate, in a simplified way, the remaining bending stiffness of cracked beam sections or segments. Hence, an RC damage model based on a static mechanism is proposed to estimate the remnant stiffness of a cracked RC beam segment. The damage model is based on the assumption that the damage effect spreads over a region and the stiffness in the segment changes linearly. Furthermore, the stiffness reduction evaluated using this damage model is compared with the FE updating result. It is shown that the proposed bidimensional damage function is useful in producing a well-conditioned optimization problem and the aforementioned damage model can be used for an approximate stiffness estimation of a cracked beam segment.
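In sensitivity-based updating of this kind, each iteration solves a linearized relation between the modal residuals and the parameter changes; schematically, in LaTeX:

$$\Delta \mathbf{z} \approx \mathbf{S}\, \Delta\boldsymbol{\theta}, \qquad S_{ij} = \frac{\partial z_i}{\partial \theta_j},$$

where $\mathbf{z}$ stacks the modal frequencies and mode-shape components, $\boldsymbol{\theta}$ collects the damage-function amplitudes, and the system is solved in a (possibly regularized) least-squares sense at each iteration. This is a generic statement of the method class, not the paper's exact formulation.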
Modelling Solar Energetic Particle Events Using the iPATH Model
NASA Astrophysics Data System (ADS)
Li, G.; Hu, J.; Ao, X.; Zank, G. P.; Verkhoglyadova, O. P.
2016-12-01
Solar Energetic Particles (SEPs) are the No. 1 space weather hazard. Understanding how particles are energized and propagate in these events is of practical concern to manned space missions. In particular, both the radial evolution and the longitudinal extent of a gradual solar energetic particle (SEP) event are central topics for space weather forecasting. In this talk, I discuss the improved Particle Acceleration and Transport in the Heliosphere (iPATH) model. The iPATH model consists of three parts: (1) an updated ZEUS3D V3.5 MHD module that models the background solar wind and the initiation of a CME in a 2D domain; (2) an updated shock acceleration module where we investigate particle acceleration at different longitudinal locations along the surface of a CME-driven shock. Accelerated particle spectra are obtained at the shock under the diffusive shock acceleration mechanism. Shock parameters and particle distributions are recorded and used as inputs for the later part. (3) an updated transport module where we follow the transport of accelerated particles from the shock to any destination (Earth and/or Mars, e.g.) using a Monte-Carlo method. Both pitch angle scattering due to MHD turbulence and perpendicular diffusion across the magnetic field are included. Our iPATH model is therefore intrinsically 2D in nature. The model is capable of generating time intensity profiles and instantaneous particle spectra at various locations and can greatly improve our current space weather forecasting capability.
Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery
NASA Astrophysics Data System (ADS)
Qin, Rongjun
2014-10-01
Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot the changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at the building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and the stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-Organizing Map (SOM), with "change", "non-change" and "uncertain change" status labeled through a voting strategy. The "uncertain changes" are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are extracted by combining the multispectral images and the DSM using morphological operators, and the new buildings are determined by excluding the verified unchanged buildings from the second step. Both a synthetic experiment with Worldview-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to another.
Ground Operations of the ISS GNC Babb-Mueller Atmospheric Density Model
NASA Technical Reports Server (NTRS)
Brogan, Jonathan
2002-01-01
The ISS GNC system was updated recently with a new software release that provides onboard state determination capability. Prior to this release, only the Russian segment maintained and propagated the onboard state, which was periodically updated through Russian ground tracking. The new software gives the US segment the capability for maintaining the onboard state, and includes new GPS and state vector propagation capabilities. Part of this software package is an atmospheric density model based on the Babb-Mueller algorithm. Babb-Mueller efficiently mimics a full analytical density model, such as the Jacchia model. While Jacchia is very robust and is used in the Mission Control Center, it is too computationally intensive for use onboard. Thus, Babb-Mueller was chosen as an alternative. The onboard model depends on a set of calibration coefficients that produce a curve fit to the Jacchia model. The ISS GNC system only maintains one set of coefficients onboard, so a new set must be uplinked by controllers when the atmospheric conditions change. The onboard density model provides a real-time density value, which is used to calculate the drag experienced by the ISS. This drag value is then incorporated into the onboard propagation of the state vector. The propagation of the state vector, and therefore operation of the Babb-Mueller algorithm, will be most critical when GPS updates and secondary state vector sources fail. When GPS is active, the onboard state vector will be updated every ten seconds, so the propagation error is irrelevant. When GPS is inactive, the state vector must be updated at least every 24 hours, based on current protocol. Therefore, the Babb-Mueller coefficients must be accurate enough to fulfill the state vector accuracy requirements for at least one day. A ground operations concept was needed in order to manage both the onboard Babb-Mueller density model and the onboard state quality. The Babb-Mueller coefficients can be determined operationally in two ways. The first method is to calibrate the coefficients in real time, where a set of custom coefficients is generated for the real-time atmospheric conditions. The second approach is to generate pre-canned sets of coefficients that encompass the expected atmospheric conditions over the lifetime of the vehicle. These predetermined sets are known as occurrences. Even though a particular occurrence will not match the true atmospheric conditions, the error will be constrained by limiting the breadth of each occurrence. Both methods were investigated and the advantages and disadvantages of each were considered. The choice between these implementations was a trade-off between the additional accuracy of the real-time calibration and the simpler development of the approach using occurrences. The operations concept for the frequency of updates was also explored; it depends on the deviation in solar flux that still achieves the necessary accuracy of the coefficients, which was determined based on historical solar flux trends. This analysis resulted in an accurate and reliable implementation of the Babb-Mueller coefficients and a concept for how flight controllers use them during real-time operations.
NASA Technical Reports Server (NTRS)
Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian
2008-01-01
The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study of the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 version of the CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Seluge++: A Secure Over-the-Air Programming Scheme in Wireless Sensor Networks
Doroodgar, Farzan; Razzaque, Mohammad Abdur; Isnin, Ismail Fauzi
2014-01-01
Over-the-air dissemination of code updates in wireless sensor networks has been a point of interest for researchers in the last few years, and, more importantly, security challenges in the remote propagation of code updates have occupied the majority of efforts in this context. Many security models have been proposed to establish a balance between energy consumption and security strength, concentrating on the constrained nature of wireless sensor network (WSN) nodes. For authentication purposes, most of them have used a Merkle hash tree to avoid using multiple public-key cryptography operations. These models have mostly assumed an environment in which security has to be at a standard level. Therefore, they have not investigated the tree structure for mission-critical situations in which security has to be at the maximum possible level (e.g., military applications, healthcare). Considering this, we investigate existing security models used in over-the-air dissemination of code updates for possible vulnerabilities, and then we provide a set of countermeasures, correspondingly named Security Model Requirements. Based on the investigation, we concentrate on Seluge, one of the existing over-the-air programming schemes, and we propose an improved version of it, named Seluge++, which complies with the Security Model Requirements and replaces the use of the inefficient Merkle tree with a novel method. Analytical and simulation results show the improvements in Seluge++ compared to Seluge. PMID:24618781
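For orientation, the Merkle hash tree used for authentication in these schemes lets a receiver verify any code packet against a single trusted root hash. A minimal sketch of the root computation (a generic binary construction, not Seluge's exact tree):

```python
import hashlib

def merkle_root(leaves):
    """Root hash of a binary Merkle tree over packet contents."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# A receiver holding this root can authenticate each packet via its sibling path.
print(merkle_root([b"packet-1", b"packet-2", b"packet-3"]).hex())
```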
Artificial Boundary Conditions for Finite Element Model Update and Damage Detection
2017-03-01
Master's thesis by Emmanouil Damanakis, March 2017. Thesis advisor: Joshua H. Gordis. Approved for public release; distribution is unlimited. Abstract (excerpt): In structural engineering, a finite element model is often...
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. Our aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) re-estimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
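The simplest of the update types above, re-estimating coefficients, reduces in its most basic form to logistic recalibration: refitting an intercept and a calibration slope on the original model's linear predictor in the local data. A sketch on synthetic data (all numbers are hypothetical; sklearn's default regularization is acceptable for an illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# lp: linear predictor of the original model evaluated on local patients
# (synthetic stand-in data); y: observed local outcomes.
rng = np.random.default_rng(0)
lp = rng.normal(size=500)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 0.6 * lp))))

# Refit intercept and slope on lp: slope < 1 indicates the original model's
# predictions are too extreme for the local case mix.
recal = LogisticRegression().fit(lp.reshape(-1, 1), y)
print("intercept update:", recal.intercept_[0],
      "calibration slope:", recal.coef_[0, 0])
```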
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H where V ~ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this paper, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates and prove monotonicity of updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness-of-fit on data. Our methods are demonstrated using experimental data from electromyography studies as well as simulated data in the extraction of muscle synergies, and compared with existing algorithms for signal-dependent noise. PMID:24684448
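As a reference point, the classical multiplicative updates for the Frobenius-norm (Gaussian-noise) objective follow the same multiplicative pattern that the paper generalizes to signal-dependent noise. A minimal sketch:

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F^2.
    Nonnegativity is preserved because every update is a multiplication by a
    nonnegative ratio; eps guards against division by zero. The paper's
    algorithms swap in generalized inverse Gaussian noise models but keep
    this multiplicative update structure."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, rank)), rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf_multiplicative(np.abs(np.random.default_rng(1).normal(size=(20, 30))), rank=4)
```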
A review and update of the Virginia Department of Transportation cash flow forecasting model.
DOT National Transportation Integrated Search
1996-01-01
This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...
Dodhia, Kejal; Stoll, Thomas; Hastie, Marcus; Furuki, Eiko; Ellwood, Simon R.; Williams, Angela H.; Tan, Yew-Foon; Testa, Alison C.; Gorman, Jeffrey J.; Oliver, Richard P.
2016-01-01
Parastagonospora nodorum, the causal agent of Septoria nodorum blotch (SNB), is an economically important pathogen of wheat (Triticum spp.), and a model for the study of necrotrophic pathology and genome evolution. The reference P. nodorum strain SN15 was the first Dothideomycete with a published genome sequence, and has been used as the basis for comparison within and between species. Here we present an updated reference genome assembly with corrections of SNP and indel errors in the underlying genome assembly from deep resequencing data as well as extensive manual annotation of gene models using transcriptomic and proteomic sources of evidence (https://github.com/robsyme/Parastagonospora_nodorum_SN15). The updated assembly and annotation includes 8,366 genes with modified protein sequence and 866 new genes. This study shows the benefits of using a wide variety of experimental methods allied to expert curation to generate a reliable set of gene models. PMID:26840125
Treacher Collins syndrome: New insights from animal models.
Tse, William Ka Fai
2016-12-01
Treacher Collins syndrome (TCS, OMIM: 154500), an autosomal-dominant craniofacial developmental syndrome that occurs in 1 out of every 50,000 live births, is characterized by craniofacial malformation. Mutations in TCOF1, POLR1C, or POLR1D have been identified in affected individuals. In addition to established mouse models, zebrafish models have recently emerged as a valuable method to study facial disease. In this report, we summarized the two updated articles working on the pathogenesis of the newly identified polr1c and polr1d TCS mutations (Lau et al., 2016; Noack Watt et al., 2016) and discussed the possibility of using anti-oxidants to prevent or rescue the TCS facial phenotype (Sakai et al., 2016). Taken together, this article provides an update on the disease from basic information to pathogenesis, and further summarizes the suggested therapies from recent laboratory research. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape-out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer-scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
NASA Astrophysics Data System (ADS)
Ren, Qianci
2018-04-01
Full waveform inversion (FWI) of ground penetrating radar (GPR) is a promising technique to quantitatively evaluate the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, making it difficult to obtain accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity in the process of model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which accounts for the effect of the Hessian operator and uses the approximate solution of the Newton equation as the perturbation model in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, resulting in a better inversion result than individual inversion.
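For reference, the cross-gradient structural constraint is usually written as below; this is the standard form from the joint-inversion literature, since the abstract does not spell out the exact weighting used in the objective function. In 2D it reduces to a scalar that vanishes wherever the two model gradients are parallel:

```latex
\mathbf{t}(x,z) = \nabla m_{\varepsilon}(x,z) \times \nabla m_{\sigma}(x,z),
\qquad
t(x,z) = \frac{\partial m_{\varepsilon}}{\partial x}\frac{\partial m_{\sigma}}{\partial z}
       - \frac{\partial m_{\varepsilon}}{\partial z}\frac{\partial m_{\sigma}}{\partial x},
```

where \(m_{\varepsilon}\) and \(m_{\sigma}\) are the permittivity and conductivity models; driving \(t\) toward zero forces the two reconstructions to share structure.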
Prospective testing of Coulomb short-term earthquake forecasts
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.
2009-12-01
Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for the detection threshold as a function of distance, time, and magnitude is needed. Third, earthquake catalogs contain errors in location and magnitude that may be corrected in later editions. One solution is to test models in “pseudo-prospective” mode (after catalog revision but without model adjustment); again, this is appropriate for science but not for response. We hope that demonstrations of modeling success will stimulate improvements in earthquake detection.
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagwell, L.; Bennett, P.; Flach, G.
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
A Multi-Index Integrated Change detection method for updating the National Land Cover Database
Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.
2010-01-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, the Normalized Difference Vegetation Index (NDVI), the Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g., biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas is showcased for the pilot test sites.
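The abstract names the four indices without giving their formulas. The sketch below computes the three standard ones from two-date reflectance bands, plus one plausible reading of the RCV; the published RCV formulation is not given in the abstract, so that function is explicitly an assumption:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance bands."""
    return (nir - swir) / (nir + swir + 1e-9)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def change_vector(img_t1, img_t2):
    """Change Vector magnitude: Euclidean distance across bands (bands on axis 0)."""
    return np.sqrt(np.sum((img_t2 - img_t1) ** 2, axis=0))

def relative_change_vector(img_t1, img_t2):
    """Hypothetical RCV: band-wise change normalized by the date-1 value.
    This is one plausible reading of 'relative' (per-band fractional change),
    not the authors' published definition."""
    return np.sqrt(np.sum(((img_t2 - img_t1) / (img_t1 + 1e-9)) ** 2, axis=0))
```

Differencing an index between the two dates gives both a change magnitude and a sign, which is how a model of this kind can report change direction (biomass increase versus decrease) as well as location.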
Technical Update for Vocational Agriculture Teachers in Secondary Schools. Final Report.
ERIC Educational Resources Information Center
Iowa State Univ. of Science and Technology, Ames. Dept. of Agricultural Education.
A project provided ongoing opportunities for teachers in Iowa to upgrade their expertise in agribusiness management using new technology; production, processing, and marketing of agricultural products; biotechnology in agriculture; and conservation of natural resources. The project also modeled effective teaching methods and strategies. Project…
Face Adaptation and Attractiveness Aftereffects in 8-Year-Olds and Adults
ERIC Educational Resources Information Center
Anzures, Gizelle; Mondloch, Catherine J.; Lackner, Christine
2009-01-01
A novel method was used to investigate developmental changes in face processing: attractiveness aftereffects. Consistent with the norm-based coding model, viewing consistently distorted faces shifts adults' attractiveness preferences toward the adapting stimuli. Thus, adults' attractiveness judgments are influenced by a continuously updated face…
MOS 2.0: Modeling the Next Revolutionary Mission Operations System
NASA Technical Reports Server (NTRS)
Delp, Christopher L.; Bindschadler, Duane; Wollaeger, Ryan; Carrion, Carlos; McCullar, Michelle; Jackson, Maddalena; Sarrel, Marc; Anderson, Louise; Lam, Doris
2011-01-01
Designed and implemented in the 1980's, the Advanced Multi-Mission Operations System (AMMOS) was a breakthrough for deep-space NASA missions, enabling significant reductions in the cost and risk of implementing ground systems. By designing a framework for use across multiple missions and adaptability to specific mission needs, AMMOS developers created a set of applications that have operated dozens of deep-space robotic missions over the past 30 years. We seek to leverage advances in technology and practice of architecting and systems engineering, using model-based approaches to update the AMMOS. We therefore revisit fundamental aspects of the AMMOS, resulting in a major update to the Mission Operations System (MOS): MOS 2.0. This update will ensure that the MOS can support an increasing range of mission types (such as orbiters, landers, rovers, penetrators and balloons), and that the operations systems for deep-space robotic missions can reap the benefits of an iterative multi-mission framework. This paper reports on the first phase of this major update. Here we describe the methods and formal semantics used to address MOS 2.0 architecture and some early results. Early benefits of this approach include improved stakeholder input and buy-in, the ability to articulate and focus effort on key, system-wide principles, and efficiency gains obtained by use of well-architected design patterns and the use of models to improve the quality of documentation and decrease the effort required to produce and maintain it. We find that such methods facilitate reasoning, simulation, and analysis of the system design in terms of design impacts, generation of products (e.g., project-review and software-delivery products), and use of formal process descriptions to enable goal-based operations. This initial phase yields a forward-looking and principled MOS 2.0 architectural vision, which considers both the mission-specific context and long-term system sustainability.
Test and analysis procedures for updating math models of Space Shuttle payloads
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1991-01-01
Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco- and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient in predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
Why, when and how to update a meta-ethnography qualitative synthesis.
France, Emma F; Wells, Mary; Lang, Heidi; Williams, Brian
2016-03-15
Meta-ethnography is a unique, systematic, qualitative synthesis approach widely used to provide robust evidence on patient and clinician beliefs and experiences and understandings of complex social phenomena. It can make important theoretical and conceptual contributions to health care policy and practice. Since beliefs, experiences, health care contexts and social phenomena change over time, the continued relevance of the findings from meta-ethnographies cannot be assumed. However, there is little guidance on whether, when and how meta-ethnographies should be updated; Cochrane guidance on updating reviews of intervention effectiveness is unlikely to be fully appropriate. This is the first in-depth discussion on updating a meta-ethnography; it explores why, when and how to update a meta-ethnography. Three main methods of updating the analysis and synthesis are examined. Advantages and disadvantages of each method are outlined, relating to the context, purpose, process and output of the update and the nature of the new data available. Recommendations are made for the appropriate use of each method, and a worked example of updating a meta-ethnography is provided. This article makes a unique contribution to this evolving area of meta-ethnography methodology.
Build-up Approach to Updating the Mock Quiet Spike™ Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine-tuning specific stiffness parameters until the analytical results matched test data. This is a time-consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike™ (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between the aircraft and the test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike™ project to within 1 percent error in frequency, and the modal assurance criterion values ranged from 88.51 to 99.42 percent.
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
MICKENAUTSCH, Steffen; YENGOPAL, Veerasamy
2013-01-01
Objective: To demonstrate the application of the modified Ottawa method by establishing the update need of a systematic review focused on the caries-preventive effect of GIC versus resin-based pit and fissure sealants; to answer the question of whether the existing conclusions of this systematic review are still current; and to establish whether a new update of this systematic review was needed. Methods: Application of the modified Ottawa method in April/May 2012. Results: Four signals aligned with the criteria of the modified Ottawa method were identified. The content of these signals suggests that higher precision of the current systematic review results might be achieved if an update of the review were conducted at this point in time. However, these signals further indicate that such a systematic review update, despite its higher precision, would only confirm the existing review conclusion that no statistically significant difference exists in the caries-preventive effect of GIC and resin-based fissure sealants. Conclusion: This study demonstrated the modified Ottawa method to be an effective tool for establishing the update need of a systematic review. In addition, it established that the conclusions of the systematic review on the caries-preventive effect of GIC versus resin-based fissure sealants are still current, and that no update of this systematic review was warranted at the date of application. PMID:24212996
NASA Astrophysics Data System (ADS)
Hu, Rong-Pan; Xu, You-Lin; Zhan, Sheng
2018-01-01
Estimation of lateral displacement and acceleration responses is essential to assess the safety and serviceability of high-rise buildings under dynamic loadings, including earthquake excitations. However, the measurement information from the limited number of sensors installed in a building structure is often insufficient for a complete structural performance assessment. An integrated multi-type sensor placement and response reconstruction method has thus been proposed by the authors to tackle this problem. To validate the feasibility and effectiveness of the proposed method, an experimental investigation using a cantilever beam with multi-type sensors is performed and reported in this paper. The experimental setup is first introduced. The finite element modelling and model updating of the cantilever beam are then performed. The optimal sensor placement for the best response reconstruction is determined by the proposed method based on the updated FE model of the beam. After the sensors are installed on the physical cantilever beam, a number of experiments are carried out. The responses at key locations are reconstructed and compared with the measured ones. The reconstructed responses achieve a good match with the measured ones, demonstrating the feasibility and effectiveness of the proposed method. The proposed method is also examined for the cases of different excitations and unknown excitation, and the results prove it to be robust and effective. The superiority of the optimized sensor placement scheme is finally demonstrated through comparison with two other sensor placement schemes: an accelerometer-only scheme and a non-optimal sensor placement scheme. The proposed method can be applied to high-rise buildings for seismic performance assessment.
ERIC Educational Resources Information Center
Isenberg, Eric; Hock, Heinrich
2011-01-01
This report presents the value-added models that will be used to measure school and teacher effectiveness in the District of Columbia Public Schools (DCPS) in the 2010-2011 school year. It updates the earlier technical report, "Measuring Value Added for IMPACT and TEAM in DC Public Schools." The earlier report described the methods used…
Dissociable effects of surprise and model update in parietal and anterior cingulate cortex
O’Reilly, Jill X.; Schüffelgen, Urs; Cuell, Steven F.; Behrens, Timothy E. J.; Mars, Rogier B.; Rushworth, Matthew F. S.
2013-01-01
Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (I_S) and Kullback–Leibler divergence (D_KL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/I_S and updating/D_KL. Although surprise/I_S was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (D_KL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating, with an early positive effect of surprise and a late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena such as extinction, Balint syndrome, and neglect. PMID:23986499
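In standard information-theoretic notation (the formalization is implicit in the abstract; the discrete form is shown here for concreteness), surprise measures how improbable an observation o was under the prior model, while the update measures how far the model moved:

```latex
I_S(o) = -\log P_{\mathrm{old}}(o),
\qquad
D_{KL}\!\left(P_{\mathrm{new}} \,\|\, P_{\mathrm{old}}\right)
  = \sum_{x} P_{\mathrm{new}}(x)\,\log\frac{P_{\mathrm{new}}(x)}{P_{\mathrm{old}}(x)}.
```

An observation can be surprising yet uninformative (high I_S with near-zero D_KL), which is exactly the dissociation the study exploits.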
Chemical transport model simulations of organic aerosol in ...
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques, matrix adjustment methods, which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
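As a concrete illustration of what a matrix adjustment method computes, one classical closed-form stiffness update (the reference-basis form in the style of Baruch and Bar Itzhack, quoted here from the general literature rather than from this report) reads, for mass-normalized measured modes \(\Phi\) with measured eigenvalue matrix \(\Lambda\) and analytical mass and stiffness matrices \(M\) and \(K\):

```latex
K_{u} = K - K\Phi\Phi^{T}M - M\Phi\Phi^{T}K
      + M\Phi\Phi^{T}K\Phi\Phi^{T}M + M\Phi\Lambda\Phi^{T}M,
```

which is the minimum-norm symmetric adjustment reproducing the measured modes. The secant-method algorithms developed in the report pursue the same goal with iterative schemes better suited to the size and sparsity of large space structures.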
Evaluating Methods of Updating Training Data in Long-Term Genomewide Selection
Neyhart, Jeffrey L.; Tiede, Tyler; Lorenz, Aaron J.; Smith, Kevin P.
2017-01-01
Genomewide selection is hailed for its ability to facilitate greater genetic gains per unit time. Over breeding cycles, the requisite linkage disequilibrium (LD) between quantitative trait loci and markers is expected to change as a result of recombination, selection, and drift, leading to a decay in prediction accuracy. Previous research has identified the need to update the training population using data that may capture new LD generated over breeding cycles; however, optimal methods of updating have not been explored. In a barley (Hordeum vulgare L.) breeding simulation experiment, we examined prediction accuracy and response to selection when updating the training population each cycle with the best predicted lines, the worst predicted lines, both the best and worst predicted lines, random lines, criterion-selected lines, or no lines. In the short term, we found that updating with the best predicted lines or the best and worst predicted lines resulted in high prediction accuracy and genetic gain, but in the long term, all methods (besides not updating) performed similarly. We also examined the impact of including all data in the training population or only the most recent data. Though patterns among update methods were similar, using a smaller but more recent training population provided a slight advantage in prediction accuracy and genetic gain. In an actual breeding program, a breeder might desire to gather phenotypic data on lines predicted to be the best, perhaps to evaluate possible cultivars. Therefore, our results suggest that an optimal method of updating the training population is also very practical. PMID:28315831
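A schematic sketch of the training-population updating strategies compared in the simulation (NumPy; the function name, the interface, and the convention that higher predicted values are "best" are illustrative assumptions, not the authors' code):

```python
import numpy as np

def update_training_set(train_x, train_y, cand_x, cand_pheno, preds,
                        method="best", n_add=50, window=None):
    """Update the training population between breeding cycles.

    train_x, train_y: current training genotypes (2-D) and phenotypes (1-D).
    cand_x: candidate genotypes; preds: their genomic predictions;
    cand_pheno: phenotypes observed on the candidates chosen for update.
    window: keep only the most recent N training lines (the
    'recent data only' scenario from the study).
    """
    order = np.argsort(preds)                      # ascending predictions
    if method == "best":
        idx = order[-n_add:]
    elif method == "worst":
        idx = order[:n_add]
    elif method == "best_worst":
        idx = np.concatenate([order[:n_add // 2], order[-(n_add // 2):]])
    elif method == "random":
        idx = np.random.choice(len(preds), n_add, replace=False)
    else:                                          # "none": no updating
        return train_x, train_y
    new_x = np.vstack([train_x, cand_x[idx]])
    new_y = np.concatenate([train_y, cand_pheno[idx]])
    if window is not None:                         # retain most recent lines only
        new_x, new_y = new_x[-window:], new_y[-window:]
    return new_x, new_y
```

The practical appeal noted in the abstract falls out of this structure: the "best" strategy adds exactly the lines a breeder would phenotype anyway as cultivar candidates.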
Image updating for brain deformation compensation in tumor resection
NASA Astrophysics Data System (ADS)
Fan, Xiaoyao; Ji, Songbai; Olson, Jonathan D.; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2016-03-01
Preoperative magnetic resonance images (pMR) are typically used for intraoperative guidance in image-guided neurosurgery, the accuracy of which can be significantly compromised by brain deformation. Biomechanical finite element models (FEM) have been developed to estimate whole-brain deformation and produce model-updated MR (uMR) that compensates for brain deformation at different surgical stages. Early stages of surgery, such as after craniotomy and after dural opening, have been well studied, whereas later stages after tumor resection begins remain challenging. In this paper, we present a method to simulate tumor resection by incorporating data from intraoperative stereovision (iSV). The amount of tissue resection was estimated from iSV using a "trial-and-error" approach, and the cortical shift was measured from iSV through a surface registration method using projected images and an optical flow (OF) motion tracking algorithm. The measured displacements were employed to drive the biomechanical brain deformation model, and the estimated whole-brain deformation was subsequently used to deform pMR and produce uMR. We illustrate the method using one patient example. The results show that the uMR aligned well with iSV and the overall misfit between model estimates and measured displacements was 1.46 mm. The overall computational time was ~5 min, including iSV image acquisition after resection, surface registration, modeling, and image warping, with minimal interruption to the surgical flow. Furthermore, we compare uMR against intraoperative MR (iMR) that was acquired following iSV acquisition.
The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...
Valence-Dependent Belief Updating: Computational Validation
Kuzmanovic, Bojana; Rigoux, Lionel
2017-01-01
People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
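A toy sketch of the reinforcement-learning account described above: the self-risk estimate moves toward the presented base rate, with a larger learning rate for good news than for bad news. This is a simplified scheme (the error is taken directly as the gap between the presented base rate and the prior self-risk estimate), and the parameter values are illustrative, not the fitted values from the study:

```python
def update_risk_estimate(prior_risk, base_rate, alpha_good=0.6, alpha_bad=0.3):
    """Valence-dependent (optimistic) update of a self-risk estimate.

    Good news = the presented base rate is lower than expected, so the
    estimation error is negative; it is weighted by the larger learning
    rate. Bad news (positive error) is weighted by the smaller one.
    """
    error = base_rate - prior_risk
    alpha = alpha_good if error < 0 else alpha_bad
    return prior_risk + alpha * error

# Example: a prior self-risk of 40% confronted with a 20% base rate (good news)
# moves further than the mirror-image bad-news trial.
print(update_risk_estimate(0.40, 0.20))  # -> 0.28
print(update_risk_estimate(0.40, 0.60))  # -> 0.46
```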
Attentional focus affects how events are segmented and updated in narrative reading.
Bailey, Heather R; Kurby, Christopher A; Sargent, Jesse Q; Zacks, Jeffrey M
2017-08-01
Readers generate situation models representing described events, but the nature of these representations may differ depending on the reading goals. We assessed whether instructions to pay attention to different situational dimensions affect how individuals structure their situation models (Exp. 1) and how they update these models when situations change (Exp. 2). In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space. Sentences containing character or spatial-location changes were perceived as event boundaries, particularly if the reader was oriented to characters or space, respectively. In Experiment 2, participants read narratives and responded to recognition probes throughout the texts. Readers who were oriented to the spatial dimension were more likely to update their situation models at spatial changes; all readers tracked the character dimension. The results from both experiments indicated that attention to individual situational dimensions influences how readers segment and update their situation models. More broadly, the results provide evidence for a global situation model updating mechanism that serves to set up new models at important narrative changes.
A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TRIANGULATED SURFACES*
Fu, Zhisong; Jeong, Won-Ki; Pan, Yongsheng; Kirby, Robert M.; Whitaker, Ross T.
2012-01-01
This paper presents an efficient, fine-grained parallel algorithm for solving the Eikonal equation on triangular meshes. The Eikonal equation, and the broader class of Hamilton–Jacobi equations to which it belongs, have a wide range of applications from geometric optics and seismology to biological modeling and analysis of geometry and images. The ability to solve such equations accurately and efficiently provides new capabilities for exploring and visualizing parameter spaces and for solving inverse problems that rely on such equations in the forward model. Efficient solvers on state-of-the-art, parallel architectures require new algorithms that are not, in many cases, optimal, but are better suited to synchronous updates of the solution. In previous work [W. K. Jeong and R. T. Whitaker, SIAM J. Sci. Comput., 30 (2008), pp. 2512–2534], the authors proposed the fast iterative method (FIM) to efficiently solve the Eikonal equation on regular grids. In this paper we extend the fast iterative method to solve Eikonal equations efficiently on triangulated domains on the CPU and on parallel architectures, including graphics processors. We propose a new local update scheme that provides solutions of first-order accuracy for both architectures. We also propose a novel triangle-based update scheme and its corresponding data structure for efficient irregular data mapping to parallel single-instruction multiple-data (SIMD) processors. We provide detailed descriptions of the implementations on a single CPU, a multicore CPU with shared memory, and SIMD architectures with comparative results against state-of-the-art Eikonal solvers. PMID:22641200
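A serial sketch of the FIM on a regular 2D grid with the Godunov upwind local solver. The paper's actual contributions, the triangle-based local update and the SIMD data mapping, are omitted here; names and the convergence tolerance are illustrative:

```python
import numpy as np

def local_update(u, f, h, i, j):
    """Godunov upwind solve of |grad u| = f at node (i, j), grid spacing h."""
    inf = np.inf
    ux = min(u[i - 1, j] if i > 0 else inf,
             u[i + 1, j] if i < u.shape[0] - 1 else inf)
    uy = min(u[i, j - 1] if j > 0 else inf,
             u[i, j + 1] if j < u.shape[1] - 1 else inf)
    a, b = min(ux, uy), max(ux, uy)
    if a == inf:
        return inf
    if b - a >= f[i, j] * h:       # only the smaller neighbor contributes
        return a + f[i, j] * h
    # both contribute: solve (q - a)^2 + (q - b)^2 = (f h)^2 for q
    return 0.5 * (a + b + np.sqrt(2.0 * (f[i, j] * h) ** 2 - (a - b) ** 2))

def fim(speed, sources, h=1.0, tol=1e-9):
    """Fast iterative method: keep an active list, relax until convergence."""
    f = 1.0 / np.asarray(speed, dtype=float)       # slowness
    u = np.full(f.shape, np.inf)
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    active = set()
    for i, j in sources:
        u[i, j] = 0.0
        active.update((i + di, j + dj) for di, dj in steps
                      if 0 <= i + di < f.shape[0] and 0 <= j + dj < f.shape[1])
    while active:
        i, j = active.pop()
        p = u[i, j]
        u[i, j] = min(p, local_update(u, f, h, i, j))
        if abs(p - u[i, j]) < tol:                 # converged: wake neighbors
            for di, dj in steps:
                ni, nj = i + di, j + dj
                if (0 <= ni < f.shape[0] and 0 <= nj < f.shape[1]
                        and (ni, nj) not in active):
                    qn = local_update(u, f, h, ni, nj)
                    if u[ni, nj] - qn > tol:
                        u[ni, nj] = qn
                        active.add((ni, nj))
        else:
            active.add((i, j))                     # keep relaxing this node
    return u
```

The key property the paper exploits is that all nodes on the active list can be updated independently in the same sweep, which is what makes the method amenable to SIMD and GPU execution.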
The National Health Educator Job Analysis 2010: Process and Outcomes
ERIC Educational Resources Information Center
Doyle, Eva I.; Caro, Carla M.; Lysoby, Linda; Auld, M. Elaine; Smith, Becky J.; Muenzen, Patricia M.
2012-01-01
The National Health Educator Job Analysis 2010 was conducted to update the competencies model for entry- and advanced-level health educators. Qualitative and quantitative methods were used. Structured interviews, focus groups, and a modified Delphi technique were implemented to engage 59 health educators from diverse work settings and experience…
In our previous research, we showed that robust Bayesian methods can be used in environmental modeling to define a set of probability distributions for key parameters that captures the effects of expert disagreement, ambiguity, or ignorance. This entire set can then be update...
NASA Astrophysics Data System (ADS)
Lisniak, D.; Meissner, D.; Klein, B.; Pinzinger, R.
2013-12-01
The German Federal Institute of Hydrology (BfG) offers navigational water-level forecasting services on the Federal Waterways, like the rivers Rhine and Danube. In cooperation with the Federal States, this mandate also includes the forecasting of flood events. For the River Rhine, the most frequented inland waterway in Central Europe, the BfG employs a hydrological model (HBV) coupled to a hydraulic model (SOBEK) by the FEWS framework to perform daily operational forecasts of water levels. Sensitivity studies have shown that the state of soil water storage in the hydrological model is a major factor of uncertainty when performing short- to medium-range forecasts some days ahead. Taking into account the various additional sources of uncertainty associated with hydrological modeling, including measurement uncertainties, it is essential to estimate an optimal initial state of the soil water storage before propagating it in time, forced by meteorological forecasts, and transforming it into discharge. We show that, using the Ensemble Kalman Filter, these initial states can be updated straightforwardly under certain hydrologic conditions. However, this approach is not sufficient if the runoff is mainly generated by snow melt. Since the snow cover evolution is modeled rather poorly by the HBV model in our operational setting, flood events caused by snow melt are consistently underestimated by the HBV model, which has long-term effects in basins characterized by a nival runoff regime. Thus, it appears beneficial to update the snow storage of the HBV model with information derived from regionalized snow cover observations. We present a method to incorporate spatially distributed snow cover observations into the lumped HBV model. We show the plausibility of this approach and assess the benefits of coupled snow cover and soil water storage updating, which combines direct insertion with an Ensemble Kalman Filter. The Ensemble Kalman Filter used here takes into account the internal routing mechanism of the HBV model, which causes a delayed response of the simulated discharge at the catchment outlet to changes in internal states.
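For context, the analysis step of the perturbed-observation Ensemble Kalman Filter used for the soil-moisture states takes the standard textbook form below; the BfG-specific state vector and the routing-aware observation operator are not detailed in the abstract:

```latex
\mathbf{x}_i^{a} = \mathbf{x}_i^{f}
  + \mathbf{K}\left(\mathbf{y} + \boldsymbol{\epsilon}_i - \mathbf{H}\,\mathbf{x}_i^{f}\right),
\qquad
\mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{\mathsf{T}}
  \left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1},
```

where \(\mathbf{x}_i^{f}\) and \(\mathbf{x}_i^{a}\) are the forecast and analysis states of ensemble member \(i\), \(\mathbf{P}^{f}\) is the ensemble sample covariance, \(\mathbf{H}\) maps states to observations, and \(\boldsymbol{\epsilon}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{R})\) are observation perturbations. Direct insertion, by contrast, simply overwrites the snow states with the observation-derived values.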
Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.
2014-01-01
Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
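A minimal sketch of the PCA motion model described above (NumPy; array shapes and function names are illustrative, and the EPID-driven cost-function optimization of the eigen-coefficients is omitted):

```python
import numpy as np

def fit_motion_model(dvfs):
    """Build a PCA lung-motion model from 4DCT-derived DVFs.

    dvfs: (n_phases, n_dofs) array; each row is a flattened displacement
    field from the reference phase to one breathing phase.
    Returns the mean DVF and the spatial eigenvectors (rows of vt)."""
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt

def reconstruct_dvf(mean, eigvecs, coeffs):
    """Volumetric DVF implied by a set of eigen-coefficients.

    In the paper the coefficients are found by optimizing agreement
    between a digitally reconstructed radiograph of the warped volume
    and the measured cine EPID projection; here we only show the
    linear reconstruction step."""
    k = len(coeffs)
    return mean + coeffs @ eigvecs[:k]
```

Because the spatial modes are fixed after training, only the handful of eigen-coefficients must be estimated per EPID frame, which is what makes per-image volumetric updating tractable.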
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes aspects of creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
NASA Astrophysics Data System (ADS)
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Mohd, Nuruol Syuhadaa; Deo, Ravinesh C.; El-Shafie, Ahmed
2017-10-01
Existing forecast models applied to reservoir inflow forecasting encounter several drawbacks, owing to the difficulty the underlying mathematical procedures have in coping with and mimicking the natural variability and stochasticity of the inflow data patterns. In this study, appropriate adjustments to the conventional coactive neuro-fuzzy inference system (CANFIS) method are proposed to improve the mathematical procedure, thus enabling better detection of the highly nonlinear patterns found in the reservoir inflow training data. This modification includes updating the back-propagation algorithm, leading to a consequent update of the membership rules, and inducing a centre-weighted set rather than the globally weighted set used in feature extraction. The modification also aids in constructing an integrated model that is able to detect not only the nonlinearity in the training data but also the wide range of features within the training data records used to simulate the forecasting model. To demonstrate the model's efficacy, the proposed CANFIS method has been applied to forecast monthly inflow data at Aswan High Dam (AHD), located in southern Egypt. Comparative analyses of the forecasting skill of the modified CANFIS and the conventional ANFIS model are carried out with statistical score indicators to assess the reliability of the developed method. The statistical metrics support the better performance of the developed CANFIS model, which significantly outperforms the ANFIS model, attaining a low relative error (23%), mean absolute error (1.4 BCM month-1), and root mean square error (1.14 BCM month-1), and a relatively large coefficient of determination (0.94). The present study ascertains the better utility of the modified CANFIS model relative to the traditional ANFIS model for reservoir inflow forecasting in a semi-arid region.
Bayesian Approaches for Model and Multi-mission Satellites Data Fusion
NASA Astrophysics Data System (ADS)
Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.
2017-12-01
Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations using new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties of (unknown) empirical model parameters, and the absence of high resolution (both spatially and temporally) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach which can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using model dynamics. In particular, we assess Kalman–Takens formulations that take advantage of the delay coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE), as well as surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with results obtained from the more traditional assimilation scheme using the Ensemble Square-Root Filter (EnSRF) technique (Khaki et al., 2017), as well as by evaluating them against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
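The delay coordinate method referenced here is the Takens embedding, in which the unavailable model dynamics are replaced by a state built from lagged copies of the observed series (standard form; the embedding dimension d and lag \(\tau\) are tuning choices not given in the abstract):

```latex
\mathbf{z}_t = \left(x_t,\; x_{t-\tau},\; x_{t-2\tau},\; \dots,\; x_{t-(d-1)\tau}\right).
```

Forecasts are then made non-parametrically in this reconstructed space (for example, from nearest-neighbor analogs) and corrected with the Kalman update, which is the essence of a Kalman–Takens filter.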
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl
2008-01-01
The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database version 1983. Assessments of Space Shuttle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
...: EPA Method Development Update on Drinking Water Testing Methods for Contaminant Candidate List... Division will describe methods currently in development for many CCL contaminants, with an expectation that several of these methods will support future cycles of the Unregulated Contaminant Monitoring Rule (UCMR...
NEAMS update quarterly report for January - March 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.S.; Hayes, S.; Pointer, D.
Quarterly highlights are: (1) The integration of Denovo and AMP was demonstrated in an AMP simulation of the thermo-mechanics of a complete fuel assembly; (2) Bison was enhanced with a mechanistic fuel cracking model; (3) Mechanistic algorithms were incorporated into various lower-length-scale models to represent fission gases and dislocations in UO2 fuels; (4) Marmot was improved to allow faster testing of mesoscale models using larger problem domains; (5) Component models of reactor piping were developed for use in Relap-7; (6) The mesh generator of Proteus was updated to accept a mesh specification from Moose and equations were formulated for the intermediate-fidelity Proteus-2D1D module; (7) A new pressure solver was implemented in Nek5000 and demonstrated to work 2.5 times faster than the previous solver; (8) Work continued on volume-holdup models for two fuel reprocessing operations: voloxidation and dissolution; (9) Progress was made on a pyroprocessing model and the characterization of pyroprocessing emission signatures; (10) A new 1D groundwater waste transport code was delivered to the used fuel disposition (UFD) campaign; (11) Efforts on waste form modeling included empirical simulation of sodium-borosilicate glass compositions; (12) The Waste team developed three prototypes for modeling hydride reorientation in fuel cladding during very long-term fuel storage; (13) A benchmark demonstration problem (fission gas bubble growth) was modeled to evaluate the capabilities of different meso-scale numerical methods; (14) Work continued on a hierarchical up-scaling framework to model structural materials by directly coupling dislocation dynamics and crystal plasticity; (15) New 'importance sampling' methods were developed and demonstrated to reduce the computational cost of rare-event inference; (16) The survey and evaluation of existing data and knowledge bases was updated for NE-KAMS; (17) The NEAMS Early User Program was launched; (18) The Nuclear Regulatory Commission (NRC) Office of Regulatory Research was introduced to the NEAMS program; (19) The NEAMS overall software quality assurance plan (SQAP) was revised to version 1.5; and (20) Work continued on NiCE and its plug-ins and other utilities, such as Cubit and VisIt.
Predicting the evolution of complex networks via similarity dynamics
NASA Astrophysics Data System (ADS)
Wu, Tao; Chen, Leiting; Zhong, Linfeng; Xian, Xingping
2017-01-01
Almost all real-world networks are subject to constant evolution, and many of them have been investigated empirically to uncover the underlying evolution mechanism. However, the evolution prediction of dynamic networks still remains a challenging problem. The crux of the matter is to estimate the future links of dynamic networks. This paper studies the evolution prediction of dynamic networks within the link prediction paradigm. To estimate the likelihood of the existence of links more accurately, an effective and robust similarity index is presented by exploiting network structure adaptively. Moreover, most existing link prediction methods do not make a clear distinction between future links and missing links. In order to predict the future links, the networks are regarded as dynamic systems in this paper, and a similarity updating method, the spatial-temporal position drift model, is developed to simulate the evolutionary dynamics of node similarity. The updated similarities are then used as input information for estimating the likelihood of future links. Extensive experiments on real-world networks suggest that the proposed similarity index performs better than baseline methods and that the position drift model performs well for evolution prediction in real-world evolving networks.
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
NASA Technical Reports Server (NTRS)
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
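The weight-only retrospective update lends itself to a compact illustration. Below is a minimal Python sketch (not the authors' code): a toy scalar double-well ensemble is propagated, and each incoming observation only multiplies the per-member weights by its likelihood, so smoothed estimates of any past state come from reweighting the stored trajectories rather than adapting the members; the dynamics and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_steps, obs_var, truth = 200, 40, 0.25, 1.0

def propagate(x, dt=0.05):
    # Toy double-well drift plus noise; a stand-in for the forecast model
    return x + 4.0 * x * (1.0 - x**2) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(x.shape)

x = rng.standard_normal(n_ens)      # initial ensemble
traj = [x.copy()]                   # stored member trajectories (never adapted)
logw = np.zeros(n_ens)              # per-member log-weights, updated retrospectively

for t in range(1, n_steps + 1):
    x = propagate(x)
    traj.append(x.copy())
    if t % 10 == 0:                 # an observation arrives
        y = truth + np.sqrt(obs_var) * rng.standard_normal()
        logw += -0.5 * (y - x) ** 2 / obs_var        # likelihood-based weight update
        w = np.exp(logw - logw.max()); w /= w.sum()
        # The same weights give a smoothed estimate of ANY past state:
        print(f"t={t}: smoothed initial-state estimate {np.sum(w * traj[0]):+.3f}")
```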
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
Exploration of cellular reaction systems.
Kirkilionis, Markus
2010-01-01
We discuss and review different ways to map cellular components and their temporal interaction with other such components to different non-spatially explicit mathematical models. The essential choices made in the literature are between discrete and continuous state spaces, between rule and event-based state updates and between deterministic and stochastic series of such updates. The temporal modelling of cellular regulatory networks (dynamic network theory) is compared with static network approaches in two first introductory sections on general network modelling. We concentrate next on deterministic rate-based dynamic regulatory networks and their derivation. In the derivation, we include methods from multiscale analysis and also look at structured large particles, here called macromolecular machines. It is clear that mass-action systems and their derivatives, i.e. networks based on enzyme kinetics, play the most dominant role in the literature. The tools to analyse cellular reaction networks are without doubt most complete for mass-action systems. We devote a long section at the end of the review to make a comprehensive review of related tools and mathematical methods. The emphasis is to show how cellular reaction networks can be analysed with the help of different associated graphs and the dissection into modules, i.e. sub-networks.
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.
Liu, Zhiyuan; Wang, Changhui
2015-10-23
In this paper, a new method is developed for compensating mass air flow (MAF) sensor error in a diesel engine and for online updating of the error map (or lookup table), which drifts due to installation and aging. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
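The map parameterization described above reduces to evaluating a dot product. The sketch below is a rough illustration, not the paper's observer: it writes a 2D lookup table as phi(u)·theta with a bilinear membership (regression) vector phi, and applies a simple gradient-style correction from one observed error; the grid ranges, gain, and update rule are invented stand-ins for the joint LPV state/parameter estimator.

```python
import numpy as np

# Grid over the two scheduling inputs (hypothetical ranges)
fuel = np.linspace(5, 50, 6)       # fuel injection quantity [mg/stroke]
speed = np.linspace(800, 4000, 6)  # engine speed [rpm]
theta = np.zeros(fuel.size * speed.size)   # map node values = parameter vector

def membership(u1, u2):
    """Regression vector phi: bilinear weights, so map(u) = phi @ theta."""
    i = int(np.clip(np.searchsorted(fuel, u1) - 1, 0, fuel.size - 2))
    j = int(np.clip(np.searchsorted(speed, u2) - 1, 0, speed.size - 2))
    a = (u1 - fuel[i]) / (fuel[i + 1] - fuel[i])
    b = (u2 - speed[j]) / (speed[j + 1] - speed[j])
    phi = np.zeros(theta.size)
    for di, dj, w in [(0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                      (0, 1, (1 - a) * b), (1, 1, a * b)]:
        phi[(i + di) * speed.size + (j + dj)] = w
    return phi

def update(u1, u2, y, gain=0.5):
    """Nudge the four surrounding map nodes from one observed error y."""
    global theta
    phi = membership(u1, u2)
    theta = theta + gain * phi * (y - phi @ theta)

update(20.0, 2000.0, 0.08)                 # one synthetic MAF-error observation
print(membership(20.0, 2000.0) @ theta)    # interpolated map value at that point
```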
Optimal updating magnitude in adaptive flat-distribution sampling
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery
2017-11-01
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
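A minimal sketch of the single-bin Wang-Landau update with the inverse-time schedule mentioned above may help; everything here (the one-dimensional toy state space, bin count, and step counts) is invented for illustration and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_steps = 20, 200_000
bias = np.zeros(n_bins)        # bias potential built from histogram-based updates
hist = np.zeros(n_bins)
s, f = 0, 1.0                  # current bin and updating magnitude

for t in range(1, n_steps + 1):
    s_new = (s + rng.choice([-1, 1])) % n_bins
    # Metropolis step targeting a flat distribution over bins
    if np.log(rng.random()) < bias[s] - bias[s_new]:
        s = s_new
    bias[s] += f               # single-bin update of the bias potential
    hist[s] += 1
    f = min(f, n_bins / t)     # inverse-time (1/t) schedule for the magnitude

print("histogram flatness (min/max):", hist.min() / hist.max())
```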
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
A review of virtual cutting methods and technology in deformable objects.
Wang, Monan; Ma, Yuzheng
2018-06-05
Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. Virtual cutting methods are introduced, and we discuss their benefits and limitations and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model as modified by cutting. Virtual cutting is widely used in surgery simulation, and research on cutting methods is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.
Basis for the ICRP’s updated biokinetic model for carbon inhaled as CO2
Leggett, Richard W.
2017-03-02
Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate $(\mathrm{HCO}_3^-)$. The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form ($\mathrm{CO}_2$–$\mathrm{HCO}_3^-$) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
Assimilation of CryoSat-2 altimetry to a hydrodynamic model of the Brahmaputra river
NASA Astrophysics Data System (ADS)
Schneider, Raphael; Nygaard Godiksen, Peter; Ridler, Marc-Etienne; Madsen, Henrik; Bauer-Gottwein, Peter
2016-04-01
Remote sensing provides valuable data for parameterization and updating of hydrological models, for example water level measurements of inland water bodies from satellite radar altimeters. Satellite altimetry data from repeat-orbit missions such as Envisat, ERS or Jason has been used in many studies, also synthetic wide-swath altimetry data as expected from the SWOT mission. This study is one of the first hydrologic applications of altimetry data from a drifting orbit satellite mission, namely CryoSat-2. CryoSat-2 is equipped with the SIRAL instrument, a new type of radar altimeter similar to SRAL on Sentinel-3. CryoSat-2 SARIn level 2 data is used to improve a 1D hydrodynamic model of the Brahmaputra river basin in South Asia set up in the DHI MIKE 11 software. CryoSat-2 water levels were extracted over river masks derived from Landsat imagery. After discharge calibration, simulated water levels were fitted to the CryoSat-2 data along the Assam valley by adapting cross section shapes and datums. The resulting hydrodynamic model shows accurate spatio-temporal representation of water levels, which is a prerequisite for real-time model updating by assimilation of CryoSat-2 altimetry or multi-mission data in general. For this task, a data assimilation framework has been developed and linked with the MIKE 11 model. It is a flexible framework that can assimilate water level data which are arbitrarily distributed in time and space. Different types of error models, data assimilation methods, etc. can easily be used and tested. Furthermore, it is not only possible to update the water level of the hydrodynamic model, but also the states of the rainfall-runoff models providing the forcing of the hydrodynamic model. The setup has been used to assimilate CryoSat-2 observations over the Assam valley for the years 2010 to 2013. Different data assimilation methods and localizations were tested, together with different model error representations. Furthermore, the impact of different filtering and clustering methods and error descriptions of the CryoSat-2 observations was evaluated. Performance improvement in terms of discharge and water level forecast due to the assimilation of satellite altimetry data was then evaluated. The model forecasts were also compared to climatology and persistence forecasts. Using ensemble based filters, the evaluation was done not only based on performance criteria for the central forecast such as root-mean-square error (RMSE) and Nash-Sutcliffe model efficiency (NSE), but also based on sharpness, reliability and continuous ranked probability score (CRPS) of the ensemble of probabilistic forecasts.
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise the water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. For this means, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations have been corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for a good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication for good prior model calibration. Asynchronous updating of parameter values once every five years (with data of the past five years) and synchronous updating of the groundwater levels is better suited for this groundwater system with not or slow changing parameter values than synchronous updating of both groundwater levels and parameters at every time step applying a damping factor. The filter is not able to correct time lags of signals.
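To make the filter mechanics concrete, here is a minimal stochastic EnKF analysis step with multiplicative covariance inflation and a damping factor, two of the anti-inbreeding measures discussed above; the three-state toy system, observation operator, and all numbers are invented, and no spatial localization is included.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R, inflation=1.05, damping=0.5):
    """Stochastic EnKF analysis with covariance inflation and a damping
    factor on the update (both used to counter filter inbreeding)."""
    n, m = X.shape                              # n states, m ensemble members
    xm = X.mean(axis=1, keepdims=True)
    X = xm + inflation * (X - xm)               # multiplicative covariance inflation
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A
    P_yy = HA @ HA.T / (m - 1) + R              # innovation covariance
    P_xy = A @ HA.T / (m - 1)                   # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + damping * K @ (Y - H @ X)        # damped Kalman update

X = rng.normal(1.0, 0.5, size=(3, 50))          # toy heads at 3 wells, 50 members
H = np.array([[1.0, 0.0, 0.0]])                 # observe the first well only
Xa = enkf_update(X, y=np.array([1.4]), H=H, R=np.array([[0.01]]))
print(Xa.mean(axis=1))
```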
Gradient optimization of finite projected entangled pair states
NASA Astrophysics Data System (ADS)
Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin
2017-05-01
Projected entangled pair states (PEPS) methods have been proven to be powerful tools to solve strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D, in a practical application PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme can reach larger bond dimensions. However, the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary time evolution method with a simple update, Monte Carlo sampling techniques and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massive parallel computing, we can study quantum systems with larger bond dimensions up to D = 10 without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.
The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, Germany, 8: 3733-3746, (2015).
Computer program CDCID: an automated quality control program using CDC update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, G.L.; Aguilar, F.
1984-04-01
A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, it is usually problematic to have too many unknown parameters in the modeling domain. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to get the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework that integrates the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated 'posterior' pdf as a memory function that stores all information to date about the distributions of soil/field attributes/properties; the memory function then serves as a new prior from which samples are generated for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the "memory function" applied in the Bayesian inversion.
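The "memory function" idea, in which each posterior becomes the prior for the next update, can be sketched with plain importance resampling. The following toy example is illustrative only; the Gaussian likelihoods and all numbers are invented, and the paper's minimum-relative-entropy and quasi-Monte Carlo machinery is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage 0: prior samples of a soil/field property (plain Monte Carlo here)
samples = rng.normal(0.0, 2.0, 20_000)

def update_memory(samples, datum, noise_sd):
    """Weight the current samples by the new likelihood and resample, so the
    posterior becomes the 'memory function' serving as the next prior."""
    w = np.exp(-0.5 * ((datum - samples) / noise_sd) ** 2)
    w /= w.sum()
    return rng.choice(samples, size=samples.size, p=w)   # posterior -> new prior

for datum in [1.2, 0.9, 1.1]:        # geophysical data arriving over time
    samples = update_memory(samples, datum, noise_sd=0.5)
    print(f"memory-function mean {samples.mean():.3f}, sd {samples.std():.3f}")
```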
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost by applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python; it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
NASA Astrophysics Data System (ADS)
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
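The tuning loop implied by a fixed target acceptance ratio is simple to sketch. Below is a hedged toy example, not the study's simulation code: a Metropolis sampler on a stand-in potential adjusts its step size toward the reported optimum of roughly 0.4; the "polymer" model, move set, and adjustment factors are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
step, target = 0.1, 0.4                  # target acceptance ratio from the study
x = np.zeros(14)                         # toy "polymer" degrees of freedom
energy = lambda x: 0.5 * np.sum(x * x)   # stand-in potential, beta = 1
E, accepted = energy(x), 0

for sweep in range(1, 201):
    for _ in range(100):
        trial = x + step * rng.standard_normal(x.shape)   # joint non-local move
        Et = energy(trial)
        if np.log(rng.random()) < E - Et:                 # Metropolis acceptance
            x, E = trial, Et
            accepted += 1
    ratio = accepted / (100 * sweep)
    step *= 1.05 if ratio > target else 0.95              # steer toward 0.4

print(f"final step {step:.3f}, acceptance {ratio:.2f}")
```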
Updating Human Factors Engineering Guidelines for Conducting Safety Reviews of Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
O, J.M.; Higgins, J.; Stephen Fleger - NRC
The U.S. Nuclear Regulatory Commission (NRC) reviews the human factors engineering (HFE) programs of applicants for nuclear power plant construction permits, operating licenses, standard design certifications, and combined operating licenses. The purpose of these safety reviews is to help ensure that personnel performance and reliability are appropriately supported. Detailed design review procedures and guidance for the evaluations are provided in three key documents: the Standard Review Plan (NUREG-0800), the HFE Program Review Model (NUREG-0711), and the Human-System Interface Design Review Guidelines (NUREG-0700). These documents were last revised in 2007, 2004 and 2002, respectively. The NRC is committed to the periodic update and improvement of the guidance to ensure that it remains a state-of-the-art design evaluation tool. To this end, the NRC is updating its guidance to stay current with recent research on human performance, advances in HFE methods and tools, and new technology being employed in plant and control room design. This paper describes the role of HFE guidelines in the safety review process and the content of the key HFE guidelines used. Then we will present the methodology used to develop HFE guidance and update these documents, and describe the current status of the update program.
Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March–April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied, with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than with the FIS WSEs. The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
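For readers unfamiliar with the L-M-BFGS machinery, the sketch below shows the standard two-loop recursion that applies the limited-memory inverse-Hessian approximation to a gradient, with the resulting quasi-Newton step clipped to a trust-region radius. This is a generic textbook construction, not the authors' algorithm, and the vectors are arbitrary examples.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the limited-memory BFGS inverse Hessian
    approximation to gradient g (s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k)."""
    q, stack = g.copy(), []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    if y_list:                                   # initial scaling H0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(stack):         # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                    # quasi-Newton search direction

# Usage inside a trust-region step: clip the quasi-Newton step to the radius
d = lbfgs_direction(np.array([1.0, -2.0]),
                    [np.array([0.1, 0.2])], [np.array([0.3, 0.1])])
radius = 0.5
step = d * min(1.0, radius / np.linalg.norm(d))
print(step)
```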
NASA Astrophysics Data System (ADS)
Murakami, S.; Takemoto, T.; Ito, Y.
2012-07-01
The Japanese government, local governments and businesses are working closely together to establish spatial data infrastructures in accordance with the Basic Act on the Advancement of Utilizing Geospatial Information (NSDI Act, established in August 2007). Spatial data infrastructures are urgently required not only to accelerate computerization of the public administration, but also to support restoration and reconstruction of the areas struck by the Great East Japan Earthquake and future disaster prevention and reduction. For construction of a spatial data infrastructure, various guidelines have been formulated. But after an infrastructure is constructed, there is a problem of maintaining it. In one case, an organization updates its spatial data only once every several years because of budget problems. Departments and sections update the data on their own without careful consideration. That upsets the quality control of the entire data system, and the system loses integrity, which is crucial to a spatial data infrastructure. To ensure quality, ideally, it is desirable to update data of the entire area every year. But that is virtually impossible, considering the recent budget crunch. The method we suggest is to update only the spatial data items of higher importance in order to maintain quality, rather than updating all the items across the board. We have explored a method of partially updating the data of two such features, roads and buildings, while ensuring the accuracy of locations. Using this method, data on roads and buildings that change greatly with time can be updated almost in real time, or at least within a year. The method will help increase the availability of a spatial data infrastructure. We have conducted an experiment on the spatial data infrastructure of a municipality using those data. As a result, we have found that it is possible to update data of both features almost in real time.
Imputation and Model-Based Updating Techniques for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
Tang, G.; Yuan, F.; Bisht, G.; ...
2015-12-17
We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50% more computing time than CLM in point mode site simulations using either the scaling or clipping methods. The log transformation method takes 60-100% more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation as the half saturation decreases from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20%; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor, because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with a very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
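The three safeguards against negative concentrations can be stated compactly. The sketch below is a schematic stand-alone illustration, not PFLOTRAN/CLM code; the scaling safety factor of 0.9 and the sample numbers are invented. It shows clipping the update bin by bin, scaling the whole Newton update, and updating in log-transformed space.

```python
import numpy as np

def apply_update(c, dc, method="scaling", eps=1e-30):
    """Keep concentrations positive during a Newton iteration, using one of
    the three options discussed: clipping, scaling, or log transformation."""
    if method == "clipping":
        return np.maximum(c + dc, eps)            # clip negatives entry by entry
    if method == "scaling":
        bad = dc < -c                             # entries that would go negative
        scale = np.min(np.where(bad, -0.9 * c / np.where(bad, dc, 1.0), 1.0))
        return c + scale * dc                     # shrink the entire update
    if method == "log":
        return c * np.exp(dc / c)                 # equivalent update in u = ln(c)
    raise ValueError(method)

c = np.array([1e-3, 5e-9])                        # concentrations [mol/m^3]
dc = np.array([-2e-3, 1e-9])                      # raw Newton update
for m in ("clipping", "scaling", "log"):
    print(m, apply_update(c, dc, m))
```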
Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed
NASA Technical Reports Server (NTRS)
Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.
1997-01-01
This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.
Preliminary Model of Porphyry Copper Deposits
Berger, Byron R.; Ayuso, Robert A.; Wynn, Jeffrey C.; Seal, Robert R.
2008-01-01
The U.S. Geological Survey (USGS) Mineral Resources Program develops mineral-deposit models for application in USGS mineral-resource assessments and other mineral resource-related activities within the USGS as well as for nongovernmental applications. Periodic updates of models are published in order to incorporate new concepts and findings on the occurrence, nature, and origin of specific mineral deposit types. This update is a preliminary model of porphyry copper deposits that begins an update process of porphyry copper models published in USGS Bulletin 1693 in 1986. This update includes a greater variety of deposit attributes than were included in the 1986 model as well as more information about each attribute. It also includes an expanded discussion of geophysical and remote sensing attributes and tools useful in resource evaluations, a summary of current theoretical concepts of porphyry copper deposit genesis, and a summary of the environmental attributes of unmined and mined deposits.
Assessment of eutrophication in estuaries: Pressure-state-response and source apportionment
David Whitall; Suzanne Bricker
2006-01-01
The National Estuarine Eutrophication Assessment (NEEA) Update Program is a management oriented program designed to improve monitoring and assessment efforts through the development of type specific classification of estuaries that will allow improved assessment methods and development of analytical and research models and tools for managers which will help guide and...
Chu, X; Korzekwa, K; Elsby, R; Fenner, K; Galetin, A; Lai, Y; Matsson, P; Moss, A; Nagar, S; Rosania, GR; Bai, JPF; Polli, JW; Sugiyama, Y; Brouwer, KLR
2013-01-01
Intracellular concentrations of drugs and metabolites are often important determinants of efficacy, toxicity, and drug interactions. Hepatic drug distribution can be affected by many factors, including physicochemical properties, uptake/efflux transporters, protein binding, organelle sequestration, and metabolism. This white paper highlights determinants of hepatocyte drug/metabolite concentrations and provides an update on model systems, methods, and modeling/simulation approaches used to quantitatively assess hepatocellular concentrations of molecules. The critical scientific gaps and future research directions in this field are discussed. PMID:23588320
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantari, F; Wang, J; Li, T
2015-06-15
Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, developed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET and the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5 times overestimation of the tumor size and a 54% tumor-to-lung contrast ratio underestimation. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved, since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.
NASA Technical Reports Server (NTRS)
Schmid, Beat; Michalsky, J.; Slater, D.; Barnard, J.; Halthore, R.; Liljegren, J.; Holben, B.; Eck, T.; Livingston, J.; Russell, P.;
2000-01-01
In the fall of 1997 the Atmospheric Radiation Measurement (ARM) program conducted an Intensive Observation Period (IOP) to study water vapor at its Southern Great Plains (SGP) site. Among the large number of instruments, four sun-tracking radiometers were present to measure the columnar water vapor (CWV). All four solar radiometers retrieve CWV by measuring solar transmittance in the 0.94-micrometer water vapor absorption band. As one of the steps in the CWV retrievals, the aerosol component is subtracted from the total transmittance in the 0.94-micrometer band. The aerosol optical depth comparisons among the same four radiometers are presented elsewhere. We have used three different methods to retrieve CWV. Without attempting to standardize on the same radiative transfer model and its underlying water vapor spectroscopy, we found the CWV to agree within 0.13 cm (rms) for CWV values ranging from 1 to 5 cm. Preliminary results obtained when using the same updated radiative transfer model with updated spectroscopy for all instruments will also be shown. Comparisons to the microwave radiometer results will also be included.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
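The reliability-weighted combination at the heart of this model is ordinary inverse-variance fusion. It is sketched below with invented numbers; this is the generic optimal-integration formula, not the authors' fitted model.

```python
import numpy as np

def combine(x_eye, var_eye, x_body, var_body):
    """Reliability-weighted fusion of two updated spatial estimates;
    weights are inverse variances, as in optimal (ML) cue integration."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    x = w_eye * x_eye + (1.0 - w_eye) * x_body
    var = 1.0 / (1.0 / var_eye + 1.0 / var_body)   # always below either input
    return x, var

# After a sideways translation, each frame's stored target location has
# accumulated its own updating noise (numbers illustrative only):
x, var = combine(x_eye=2.1, var_eye=0.4, x_body=1.7, var_body=0.9)
print(f"fused location {x:.2f}, variance {var:.2f}")
```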
Self-supervised online metric learning with low rank constraint for scene categorization.
Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo
2013-08-01
Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small number of training samples is available in the beginning, and many more would come sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model can not only provide competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled via graph-based label propagation, while the model can also self-update using the more confident new samples. With the ability of online learning, our methodology can well handle large-scale streaming video data with incremental self-updating. We evaluate our model on online scene categorization, and experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
[Purity Detection Model Update of Maize Seeds Based on Active Learning].
Tang, Jin-ya; Huang, Min; Zhu, Qi-bing
2015-08-01
Seed purity reflects the degree to which seed varieties show their typical consistent characteristics, so improving the reliability and accuracy of seed purity detection is of great importance for guaranteeing seed quality. Hyperspectral imaging can capture the internal and external characteristics of seeds at the same time and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of the model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to rapidly update the model's ability. Random selection (RS) and the Kennard-Stone algorithm (KS) were used for comparison with the active learning update. The experimental results indicated that, for different sample-set division proportions (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, to which 40 samples from 2011 selected by the active learning algorithm were added, increased the prediction accuracy for new 2011 samples from 47%, 33.75%, and 49% to 98.89%, 98.33%, and 98.33%. For the updated 2011 model, prediction accuracy for new 2010 samples increased by 50.83%, 54.58%, and 53.75%, reaching 94.57%, 94.02%, and 94.57%, after adding 56 new samples from 2010. The model updated by the active learning algorithm also outperformed those updated by RS and KS. Therefore, updating the purity detection model of maize seeds via active learning is feasible.
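The selection step of such an update can be sketched generically. The toy example below is not the paper's hyperspectral pipeline: the nearest-centroid "purity" classifier, margin criterion, and synthetic data are invented stand-ins. It picks the new-season samples the current model is least confident about and retrains on the expanded sample space.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit(X, y):
    """Toy stand-in for the spectral model: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def margin(model, X):
    """Distance gap between the two nearest centroids; small = uncertain."""
    d = np.sort([np.linalg.norm(X - m, axis=1) for m in model.values()], axis=0)
    return d[1] - d[0]

X_old = rng.normal(0.0, 1.0, (60, 5)); y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(0.6, 1.0, (200, 5)); y_new = (X_new[:, 0] > 0.6).astype(int)

model = fit(X_old, y_old)
pick = np.argsort(margin(model, X_new))[:40]   # 40 most informative new samples
X_up = np.vstack([X_old, X_new[pick]])         # expanded sample space
y_up = np.concatenate([y_old, y_new[pick]])    # labels from a reference assay
model = fit(X_up, y_up)                        # updated detection model
print(f"updated model trained on {len(X_up)} samples")
```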
Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation
NASA Technical Reports Server (NTRS)
2012-01-01
At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties–2010. The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables. The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations. The approaches described in the reports from all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID. The general approach for estimating risk and uncertainty in the proposed model is broadly similar to that used for the current (2005) NASA model and is based on recommendations by the National Council on Radiation Protection and Measurements. However, NASA's proposed model has significant changes with respect to the following: the integration of new findings and methods into its components by taking into account newer epidemiological data and analyses, new radiobiological data indicating that quality factors differ for leukemia and solid cancers, an improved method for specifying quality factors in terms of radiation track structure concepts as opposed to the previous approach based on linear energy transfer, the development of a new solar particle event (SPE) model, and the updates to galactic cosmic ray (GCR) and shielding transport models. The newer epidemiological information includes updates to the cancer incidence rates from the life span study (LSS) of the Japanese atomic bomb survivors, transferred to the U.S. population and converted to cancer mortality rates from U.S. population statistics. In addition, the proposed model provides an alternative analysis applicable to lifetime never-smokers (NSs). Details of the uncertainty analysis in the model have also been updated and revised. NASA's proposed model and associated uncertainties are complex in their formulation and as such require a very clear and precise set of descriptions. The committee found the 2011 NASA report challenging to review largely because of the lack of clarity in the model descriptions and derivation of the various parameters used. The committee requested some clarifications from NASA throughout its review and was able to resolve many, but not all, of the ambiguities in the written description.
An investigation of soil-structure interaction effects observed at the MIT Green Building
Taciroglu, Ertugrul; Çelebi, Mehmet; Ghahari, S. Farid; Abazarsa, Fariba
2016-01-01
The soil-foundation impedance function of the MIT Green Building is identified from its response signals recorded during an earthquake. Estimation of foundation impedance functions from seismic response signals is a challenging task, because: (1) the foundation input motions (FIMs) are not directly measurable, (2) the as-built properties of the super-structure are only approximately known, and (3) the soil-foundation impedance functions are inherently frequency-dependent. In the present study, the aforementioned difficulties are circumvented by using, in succession, a blind modal identification (BMID) method, a simplified Timoshenko beam model (TBM), and a parametric updating of transfer functions (TFs). First, the flexible-base modal properties of the building are identified from response signals using the BMID method. Then, a flexible-base TBM is updated using the identified modal data. Finally, the frequency-dependent soil-foundation impedance function is estimated by minimizing the discrepancy between TFs (of pairs of instrumented floors) that are (1) obtained experimentally from earthquake data and (2) obtained analytically from the updated TBM. Using the fully identified flexible-base TBM, the FIMs as well as building responses at locations without instruments can be predicted, as demonstrated in the present study.
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating the behavior observed from the caregiver, he/she updates the model of his/her own estimated view by minimizing the estimation error of the reward during the behavior. From this perspective, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver is discussed.
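The shared update signal can be sketched in a few lines. Below, a toy tabular example (not the robots' implementation; the chain environment, learning rate, and discount are invented) drives both the state values and a placeholder set of view-estimation parameters from the same TD error.

```python
import numpy as np

rng = np.random.default_rng(6)
n_states, alpha, gamma = 10, 0.1, 0.95
V = np.zeros(n_states)        # state values learned by reinforcement learning
theta = np.zeros(n_states)    # view-estimation parameters, same update signal

s = 0
for step in range(5000):
    s_next = min(s + rng.choice([0, 1]), n_states - 1)   # drift toward the goal
    r = 1.0 if s_next == n_states - 1 else 0.0           # intrinsic reward at goal
    td = r + gamma * V[s_next] - V[s]   # TD error: estimation error of state value
    V[s] += alpha * td                  # value update
    theta[s] += alpha * td              # view parameters driven by the same TD error
    s = 0 if s_next == n_states - 1 else s_next          # restart after the goal

print(np.round(V, 2))
```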
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least-squares estimation. A sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
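A sketch of the nine-parameter calibration model (three hard-iron offsets plus a 3x3 symmetric soft-iron matrix): here the parameters are recovered with a plain linear least-squares ellipsoid fit on synthetic data; the paper's Newton iteration, HHT pre-processing, and sifter initialization are omitted, so this is an illustration of the model, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic distorted measurements: h = S @ m + b, with |m| = 1 (unit field)
S = np.array([[1.1, 0.05, 0.0], [0.05, 0.9, 0.02], [0.0, 0.02, 1.05]])
b = np.array([0.2, -0.1, 0.3])
m = rng.standard_normal((500, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)
h = m @ S.T + b + 0.002 * rng.standard_normal((500, 3))

# fit the quadric a x^2 + ... + 2 g x + 2 hh y + 2 i z = 1 to the raw readings
x, y, z = h.T
D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z])
beta, *_ = np.linalg.lstsq(D, np.ones(len(h)), rcond=None)
Q = np.array([[beta[0], beta[3], beta[4]],
              [beta[3], beta[1], beta[5]],
              [beta[4], beta[5], beta[2]]])
u = beta[6:9]
center = -np.linalg.solve(Q, u)              # hard-iron (offset) estimate
scale = 1.0 + u @ np.linalg.solve(Q, u)      # (h - c)^T Q (h - c) = scale on the fit
w_eig, V = np.linalg.eigh(Q / scale)
A = V @ np.diag(np.sqrt(w_eig)) @ V.T        # soft-iron correction matrix

calibrated = (h - center) @ A.T
print(np.linalg.norm(calibrated, axis=1).std())   # near 0: back on a unit sphere
```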
Simulated and observed 2010 floodwater elevations in the Pawcatuck and Wood Rivers, Rhode Island
Zarriello, Phillip J.; Straub, David E.; Smith, Thor E.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term U.S. Geological Survey streamgages in Rhode Island. In response to this flood, hydraulic models of the Pawcatuck River (26.9 miles) and Wood River (11.6 miles) were updated from the most recent approved U.S. Department of Homeland Security-Federal Emergency Management Agency flood insurance study (FIS) to simulate water-surface elevations (WSEs) for specified flows and boundary conditions. The hydraulic models were updated to the Hydrologic Engineering Center-River Analysis System (HEC-RAS) using steady-state simulations and incorporated new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were used to simulate the 0.2-percent annual exceedance probability (AEP) flood, which is the AEP determined for the 2010 flood in the Pawcatuck and Wood Rivers. The simulated WSEs were compared to high-water mark (HWM) elevation data obtained in a related study following the March-April 2010 flood, which included 39 HWMs along the Pawcatuck River and 11 HWMs along the Wood River. The 2010 peak flow generally was larger than the 0.2-percent AEP flow, which, in part, caused the FIS and updated-model WSEs to be lower than the 2010 HWMs. The 2010 HWMs for the Pawcatuck River averaged about 1.6 feet (ft) higher than the 0.2-percent AEP WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The 2010 HWMs for the Wood River averaged about 1.3 ft higher than the WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The improved agreement of the updated simulated water elevations with the observed 2010 HWMs provides a measure of hydraulic model performance, which indicates that the updated models better represent flooding at other AEPs than the existing FIS models.
Frequency Response Function Based Damage Identification for Aerospace Structures
NASA Astrophysics Data System (ADS)
Oliver, Joseph Acton
Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based on concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization, producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems, including methods for initial baseline correlation and data reduction, and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
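A minimal sketch of the weighted least-squares update at the heart of such algorithms, assuming an invented undamped 2-DOF system: a residual force vector built from "measured" responses is linearized around the current stiffness parameters, and the weighted normal equations give the parameter update and its covariance. Names, weights, and the finite-difference Jacobian are illustrative, not the dissertation's implementation.

```python
import numpy as np

def dynamic_stiffness(p, w):
    k1, k2 = p
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.eye(2)                               # unit masses for simplicity
    return K - w**2 * M

p_true, p_model = np.array([90.0, 100.0]), np.array([100.0, 100.0])
freqs = [2.0, 5.0, 8.0]
f = np.array([1.0, 0.0])                        # unit force at DOF 1

def residual(p):
    r = []
    for w in freqs:
        x_meas = np.linalg.solve(dynamic_stiffness(p_true, w), f)  # "measured" FRF column
        r.append(dynamic_stiffness(p, w) @ x_meas - f)             # residual force vector
    return np.concatenate(r)

def jacobian(p, eps=1e-6):                      # finite-difference sensitivities
    r0, J = residual(p), np.zeros((len(freqs) * 2, p.size))
    for j in range(p.size):
        dp = p.copy(); dp[j] += eps
        J[:, j] = (residual(dp) - r0) / eps
    return J

W = np.eye(len(freqs) * 2)                      # statistical weights (identity here)
for _ in range(5):                              # weighted Gauss-Newton iterations
    J, r = jacobian(p_model), residual(p_model)
    p_model = p_model + np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
cov = np.linalg.inv(J.T @ W @ J)                # estimation uncertainty (up to noise scale)
print("identified stiffnesses:", p_model.round(2))
```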
Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei
2018-05-24
To investigate whether the management of undescended testis (UDT) may be improved with educational updates and a new transferring model among referring providers (RPs). The ages at orchidopexy performed in the Children's Hospital of Chongqing Medical University were reviewed. We then introduced educational updates and a new transferring model among RPs, and collected the ages at orchidopexy performed after our intervention. Data were represented graphically, and the chi-square test for trend was used for statistical analysis. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any subsequent year. A survey of the RPs showed that 48.85% of them recommended an age below 12 months. However, only 25.50% of them would directly make a surgical referral to pediatric surgery at this point. After we introduced the educational updates, tracking of the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transferring model among primary healthcare practitioners.
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimizes the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski. Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
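The model-update step reduces to a damped linear least-squares solve once the sensitivity matrix is assembled. A sketch with random stand-in kernels (dimensions, damping, and noise level are assumptions; the real kernels come from the stored wavefield spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_cells = 40, 100
K = rng.standard_normal((n_data, n_cells))        # sensitivity kernels, one row per datum
m_true = np.zeros(n_cells); m_true[40:60] = 0.05  # true model perturbation
d_obs = K @ m_true + 0.01 * rng.standard_normal(n_data)

m, lam = np.zeros(n_cells), 1.0                   # start model, damping weight
for it in range(3):                               # frequency content could grow here
    dd = d_obs - K @ m                            # data misfit (linearized forward)
    dm = np.linalg.solve(K.T @ K + lam * np.eye(n_cells), K.T @ dd)
    m = m + dm                                    # Gauss-Newton model update
print("final misfit:", np.linalg.norm(d_obs - K @ m))
```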
NASA Technical Reports Server (NTRS)
Newman, C. M.
1977-01-01
The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
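The two rules can be contrasted in a short simulation. The 3-strategy payoff matrix, population size, and mutation rate below are toy assumptions, and inverse-fitness weighting is one concrete choice for "selecting unfit individuals for death":

```python
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[1.0, 1.8, 0.2],     # toy cyclic payoff matrix, 3 strategies
              [0.2, 1.0, 1.8],
              [1.8, 0.2, 1.0]])
N, mu, steps = 200, 0.05, 50_000   # population size, mutation rate, updates

def fitness(pop):
    counts = np.bincount(pop, minlength=3)
    return A[pop] @ counts / N     # mean payoff of each individual (well-mixed)

def run(rule):
    pop = rng.integers(0, 3, N)
    for _ in range(steps):
        f = fitness(pop)
        if rule == "birth-death":              # fitness-biased birth, random death
            parent = rng.choice(N, p=f / f.sum())
            dead = rng.integers(N)
        else:                                  # random birth, inverse-fitness death
            parent = rng.integers(N)
            dead = rng.choice(N, p=(1 / f) / (1 / f).sum())
        pop[dead] = rng.integers(3) if rng.random() < mu else pop[parent]
    return np.bincount(pop, minlength=3) / N   # final strategy frequencies

print("birth-death:", run("birth-death"))
print("death-birth:", run("death-birth"))
```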
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-Ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
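The two-stage idea can be sketched with off-the-shelf tools: scikit-learn's mini-batch dictionary learner stands in for the paper's K-SVD initialization and block-coordinate-descent update, and the shape data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_landmarks = 64                       # each shape: 32 2-D landmarks, flattened
first_batch = rng.standard_normal((200, n_landmarks))

learner = MiniBatchDictionaryLearning(n_components=20, batch_size=16, random_state=0)
learner.fit(first_batch)               # initial dictionary (K-SVD in the paper)

for _ in range(5):                     # new training shapes arriving over time
    new_shapes = rng.standard_normal((50, n_landmarks))
    learner.partial_fit(new_shapes)    # online update, no rebuild from scratch

# sparse shape composition: approximate an input shape by a sparse combination
shape_instance = rng.standard_normal((1, n_landmarks))
codes = sparse_encode(shape_instance, learner.components_, algorithm="omp",
                      n_nonzero_coefs=5)
prior = codes @ learner.components_    # refined shape prior from appearance cues
```

The design point worth noting is that the dictionary size, not the repository size, bounds the run-time sparse optimization, which is exactly the trade the paper is after.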
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.
Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang
2017-10-16
Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L.
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Multibeam Altimeter Navigation Update Using Faceted Shape Model
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul; Broschart, Steve
2008-01-01
A method of incorporating information, acquired by a multibeam laser or radar altimeter system, pertaining to the distance and direction between the system and a nearby target body, into an estimate of the state of a vehicle upon which the system is mounted, involves the use of a faceted model to represent the shape of the target body. Fundamentally, what one seeks to measure is the distance from the vehicle to the target body.
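A sketch of the geometric core under a faceted shape model: each altimeter beam is a ray from the vehicle, and the predicted range is its intersection distance with the nearest triangular facet (a standard Moller-Trumbore test; the facet, state, and beam below are illustrative assumptions, not the NASA implementation).

```python
import numpy as np

def ray_triangle_distance(origin, direction, tri):
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < 1e-12:
        return None                      # beam parallel to the facet plane
    t_vec = origin - v0
    u = (t_vec @ p) / det
    q = np.cross(t_vec, e1)
    v = (direction @ q) / det
    t = (e2 @ q) / det
    if u < 0 or v < 0 or u + v > 1 or t <= 0:
        return None                      # beam misses this facet
    return t                             # range along the beam

def predicted_range(origin, direction, facets):
    hits = [ray_triangle_distance(origin, direction, f) for f in facets]
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else None   # nearest facet along the beam

facet = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
r = predicted_range(np.array([1.0, 1.0, 5.0]), np.array([0.0, 0.0, -1.0]), [facet])
print(r)   # 5.0; measured minus predicted range is the innovation fed to the estimator
```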
OSATE Overview & Community Updates
2015-02-15
Presented by Julien Delange. Covers OSATE's main language capabilities, modeling patterns and model samples for beginners, Error-Model examples, EMV2 model constructs, and a demonstration of tools and case studies.
Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo
2018-01-01
Data assimilation is becoming a promising technique in hydrologic modelling, used to update not only model states but also model parameters, and specifically to infer soil hydraulic properties in Richards-equation-based soil water models. The Ensemble Kalman Filter (EnKF) is one of the most widely employed methods among the data assimilation alternatives. In this study, the complete Matlab© code used to study soil data assimilation efficiency under different soil and climatic conditions is shown. The code shows how data assimilation through the EnKF was implemented. The Richards equation was solved with the Hydrus-1D software, which was run from Matlab. • MATLAB routines are released to be used/modified without restrictions by other researchers. • Data assimilation Ensemble Kalman Filter method code. • Soil water Richards equation flow solved by Hydrus-1D.
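A minimal EnKF analysis-step sketch in Python (the released routines are in MATLAB; this stand-in only illustrates the update of an augmented state-plus-parameter ensemble with synthetic numbers):

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_state = 50, 10
ens = rng.normal(0.3, 0.05, (n_state, n_ens))     # soil moisture + parameter rows
H = np.zeros((1, n_state)); H[0, 2] = 1.0         # observe moisture at one depth
R = np.array([[0.01**2]])                         # observation error variance
y = np.array([0.35])                              # measured soil moisture

X = ens - ens.mean(axis=1, keepdims=True)         # ensemble anomalies
P_HT = X @ (H @ X).T / (n_ens - 1)                # cross covariance P H^T
S = (H @ X) @ (H @ X).T / (n_ens - 1) + R         # innovation covariance
K = P_HT @ np.linalg.inv(S)                       # Kalman gain
y_pert = y[:, None] + rng.normal(0, 0.01, (1, n_ens))  # perturbed observations
ens = ens + K @ (y_pert - H @ ens)                # analysis ensemble
```

Because parameters sit in the same augmented ensemble as the states, the same gain updates both, which is how EnKF assimilation infers soil hydraulic properties.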
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. To validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors. To develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and additional candidate predictors did not significantly increase discrimination, being 94%, and leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%.
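A hedged sketch of the two validation metrics named above, discrimination as ROC AUC and calibration via a decile-based Hosmer-Lemeshow-type statistic. The data are simulated, not from the study:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 400
p_pred = rng.uniform(0.01, 0.99, n)    # model-predicted probability of peanut allergy
y = rng.binomial(1, p_pred)            # outcomes (perfectly calibrated by construction)

print("discrimination (AUC):", round(roc_auc_score(y, p_pred), 3))

# Hosmer-Lemeshow: compare observed vs expected events within risk deciles
edges = np.quantile(p_pred, np.linspace(0, 1, 11))
groups = np.digitize(p_pred, edges[1:-1])          # decile index 0..9
hl = 0.0
for g in range(10):
    m = groups == g
    obs, exp = y[m].sum(), p_pred[m].sum()
    n_g, pbar = m.sum(), p_pred[m].mean()
    hl += (obs - exp) ** 2 / (n_g * pbar * (1 - pbar))
print("Hosmer-Lemeshow p-value:", round(1 - stats.chi2.cdf(hl, df=8), 3))
```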
Description and evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced
Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook
2015-11-06
In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing the Zero velocity UPdaTe (ZUPT)-based pedestrian dead reckoning (PDR) and the kinematic constraints of the lower human body. ZUPT is a well known algorithm for PDR, and provides a sufficiently accurate position solution for short term periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is integrating the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, which is calculated by using orientation sensors mounted on both thighs and calves, is adopted. We note that the position of the left and right feet cannot be apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. The Extended Kalman Filter (EKF) on the waist data that estimates and corrects error states uses these measurements and magnetic heading measurements, which enhances the heading accuracy. The updated position information is fed into the foot mounted sensors, and reupdate processes are performed to correct the position error of each foot. The proposed update-reupdate technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so that it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
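An illustrative fragment of the ZUPT idea: detect stance phases from the IMU signal and feed a zero-velocity pseudo-measurement to a Kalman update on the velocity error state. The thresholds and the 1-D scalar simplification are assumptions, not the paper's filter.

```python
import numpy as np

def stance_phases(acc_norm, g=9.81, window=10, thresh=0.05):
    """True where the windowed variance of |acc| - g is small (foot on ground)."""
    pad = np.pad(acc_norm - g, window // 2, mode="edge")
    var = np.array([pad[i:i + window].var() for i in range(len(acc_norm))])
    return var < thresh

# synthetic accelerometer magnitude: still foot, then swing phase
acc_norm = 9.81 + np.r_[np.zeros(40), 0.6 * np.sin(np.linspace(0, 12, 60))]
print("stance samples detected:", stance_phases(acc_norm).sum())

# scalar Kalman update at a detected stance instant (v = 0 pseudo-measurement)
v_est, P, R = 0.12, 0.04, 1e-4        # velocity estimate, its variance, meas. noise
K = P / (P + R)                       # Kalman gain
v_est = v_est + K * (0.0 - v_est)     # ZUPT: the true foot velocity is zero
P = (1 - K) * P
print("corrected velocity and variance:", v_est, P)
```

The paper's contribution layers a second, kinematic-constraint measurement (left and right feet tied through the lower-body model) on top of exactly this kind of update, which is what stabilizes the heading.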
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
Wang, Lingling; Fu, Li
2018-01-01
In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation system (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which is different from the conventional algorithms simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during velocity update with the advantage of less computation load compared with conventional algorithms.
Updating the Behavior Engineering Model.
ERIC Educational Resources Information Center
Chevalier, Roger
2003-01-01
Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)
Use of indexing to update United States annual timber harvest by state
James Howard; Enrique Quevedo; Andrew Kramp
2009-01-01
This report provides an index method that can be used to update recent estimates of timber harvest by state to a common current year and to make 5-year projections. The Forest Service Forest Inventory and Analysis (FIA) program makes estimates of harvest for each state in differing years. The purpose of this updating method is to bring each state-level estimate up to a...
Ellis, K J; Bell, S J; Chertow, G M; Chumlea, W C; Knox, T A; Kotler, D P; Lukaski, H C; Schoeller, D A
1999-01-01
In 1994, the National Institutes of Health (NIH) convened a Technology Assessment Conference "to provide physicians with a responsible assessment of bioelectrical impedance analysis (BIA) technology for body composition measurement." In 1997, Serono Symposia USA, Inc., organized an invited panel of scientists and clinicians, with extensive research and clinical experience with BIA, to provide an update. Panel members presented reviews based on their own work and published studies for the intervening years. Updates were provided on the single and multifrequency BIA methods and models; continued clinical research experiences; efforts toward establishing population reference norms; and the feasibility of establishing guidelines for potential diagnostic use of BIA in a clinical setting. This report provides a summary of the panel's findings including a consensus on several technical and clinical issues related to the research use of BIA, and those areas that are still in need of additional study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.
2013-07-01
ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.
Fast Deep Tracking via Semi-Online Domain Adaptation
NASA Astrophysics Data System (ADS)
Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen
2018-04-01
Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low FPS rates. To alleviate the problem, a number of real-time deep trackers have been proposed that remove the online updating procedure on the CNN model. However, the absence of online updating leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain, respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs at real-time speed on an average consumer GPU.
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
Semi Automated Land Cover Layer Updating Process Utilizing Spectral Analysis and GIS Data Fusion
NASA Astrophysics Data System (ADS)
Cohen, L.; Keinan, E.; Yaniv, M.; Tal, Y.; Felus, A.; Regev, R.
2018-04-01
Technological improvements in mass data gathering and analysis in recent years have influenced the traditional methods of updating and forming the national topographic database, and have brought a significant increase in the number of use cases and in the demand for detailed geo-information. Processes whose purpose is to replace traditional data collection methods have been developed in many National Mapping and Cadaster Agencies. There has been significant progress in semi-automated methodologies aiming to facilitate the updating of a national topographic geodatabase, and their implementation is expected to allow a considerable reduction in updating costs and operation times. Our previous activity focused on automatic building extraction (Keinan, Zilberstein et al., 2015). Before semi-automatic updating methods, it was common for interpreter identification to be as detailed as possible, so as to maintain the most reliable database. When using semi-automatic updating methodologies, the ability to insert human knowledge-based insights is limited. Our motivation was therefore to reduce this gap by allowing end users to add their own data inputs to the basic geometric database. In this article, we present a simple land cover database updating method that combines insights extracted from the analyzed image with given spatial data from vector layers. The main stages of the advanced practice are multispectral image segmentation and supervised classification, together with geometric fusion of the given vector data, while maintaining the principle that little shape-editing work should remain. All coding was done utilizing open source software components.
Information dissemination model for social media with constant updates
NASA Astrophysics Data System (ADS)
Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui
2018-07-01
With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and then propose the priority of related information. To evaluate the proposed model more effectively, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
Seismic hazard in the eastern United States
Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison
2015-01-01
The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.
NASA Astrophysics Data System (ADS)
Fuller, C. W.; Unruh, J.; Lindvall, S.; Lettis, W.
2009-05-01
An integral component of the safety analysis for proposed nuclear power plants within the US is a probabilistic seismic hazard assessment (PSHA). Most applications currently under NRC review followed guidance provided within NRC Regulatory Guide 1.208 (RG 1.208) for developing seismic source characterizations (SSC) for their PSHA. Three key components of RG 1.208 guidance are that applicants should: (1) use existing PSHA models and SSCs accepted by the NRC as a starting point for their SSCs; (2) evaluate new information and data developed since acceptance of the starting model to determine if the model should be updated; and (3) follow guidelines set forth by the Senior Seismic Hazard Analysis Committee (SSHAC) (NUREG/CR-6372) in developing significant updates (i.e., updates should capture SSC uncertainty by representing the "center, body, and range of technical interpretations" of the informed technical community). Major motivations for following this guidance are to ensure accurate representations of hazard and regulatory stability in hazard estimates for nuclear power plants. All current applications with the NRC have used the EPRI-SOG source characterizations developed in the 1980s as their starting point model, and all applicants have followed RG 1.208 guidance in updating the EPRI-SOG model. However, there has been considerable variability in how applicants have interpreted the guidance, and thus there has been considerable variability in the methodology used in updating the SSCs. Much of the variability can be attributed to how different applicants have interpreted the implications of new data, new interpretations of new and/or old data, and new "opinions" of members of the informed technical community. For example, many applicants and the NRC have wrestled with the challenge of whether or not to update SSCs in light of new opinions or interpretations of older data put forth by one member of the technical community. This challenge has been further complicated by: (1) a given applicant's uncertainty in how to revise the EPRI-SOG model, which was developed using a process similar to that dictated by SSHAC for a level 3 or 4 study, without conducting a resource-intensive SSHAC level 3 or higher study for their respective application; and (2) a lack of guidance from the NRC on acceptable methods of demonstrating that new data, interpretations, and opinions are adequately represented within the EPRI-SOG model. Partly because of these issues, initiative was taken by the nuclear industry, NRC and DOE to develop a new base PSHA model for the central and eastern US. However, this new SSC model will not be completed for several years and does not resolve many of the fundamental regulatory and philosophical issues that have been raised during the current round of applications. To ensure regulatory stability and to provide accurate estimates of hazard for nuclear power plants, a dialog must be started between regulators and industry to resolve these issues. Two key issues that must be discussed are: (1) should new data and new interpretations or opinions of old data be treated differently in updated SSCs, and if so, how?; and (2) how can new data or interpretations developed by a small subset of the technical community be weighed against and potentially combined with an SSC model that was originally developed to capture the "center, body and range" of the technical community?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
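A 1-D sketch of the restricted-smoothing construction: prolongation columns start as indicators of coarse blocks and are relaxed with damped Jacobi on the fine-scale matrix, then rescaled to preserve the partition of unity. Support localization and all unstructured-grid machinery are omitted; sizes, the coefficient field, and the clipping step are assumptions.

```python
import numpy as np

n_fine, n_coarse = 100, 5
block = n_fine // n_coarse
# fine-scale 1-D "pressure" matrix with a heterogeneous coefficient field
k = np.exp(np.random.default_rng(0).normal(0, 1, n_fine + 1))
A = np.zeros((n_fine, n_fine))
for i in range(n_fine):
    A[i, i] = k[i] + k[i + 1]
    if i > 0: A[i, i - 1] = -k[i]
    if i < n_fine - 1: A[i, i + 1] = -k[i + 1]

P = np.zeros((n_fine, n_coarse))
for j in range(n_coarse):                      # start from a constant per coarse block
    P[j * block:(j + 1) * block, j] = 1.0

D_inv, omega = 1.0 / np.diag(A), 2.0 / 3.0
for _ in range(100):                           # restricted smoothing iterations
    P = P - omega * (D_inv[:, None] * (A @ P))
    P = np.clip(P, 0.0, None)
    P = P / P.sum(axis=1, keepdims=True)       # keep the partition of unity

A_coarse = P.T @ A @ P                         # coarse system, as in multiscale FV
print(A_coarse.round(3))
```

Mobility changes would be handled by a few extra smoothing iterations on the existing P rather than a rebuild, which is the cost advantage the abstract emphasizes.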
Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.
Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone
2017-05-31
Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also impacted on participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset.
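A minimal Rescorla-Wagner sketch of the modeling logic: the belief in cue validity is updated after each trial by a prediction error scaled by a learning rate, and a reduced learning rate (the reported TMS effect) leaves a false prior belief inflated. Trial counts and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
true_cv = 0.5                      # actual cue validity in a "false %CV" block
n_trials = 100

def run_block(alpha, prior=0.9):   # instructed (false) prior belief of 90% CV
    v = prior
    for _ in range(n_trials):
        outcome = rng.random() < true_cv          # was the cue valid on this trial?
        v = v + alpha * (float(outcome) - v)      # Rescorla-Wagner update
    return v

print("sham-like, alpha=0.10:", round(run_block(0.10), 2))   # belief approaches 0.5
print("TMS-like,  alpha=0.02:", round(run_block(0.02), 2))   # belief stays inflated
```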
NASA Technical Reports Server (NTRS)
Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.
1993-01-01
Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
Parallel Implementation of 3-D Iterative Reconstruction With Intra-Thread Update for the jPET-D4
NASA Astrophysics Data System (ADS)
Lam, Chih Fung; Yamaya, Taiga; Obi, Takashi; Yoshida, Eiji; Inadama, Naoko; Shibuya, Kengo; Nishikido, Fumihiko; Murayama, Hideo
2009-02-01
One way to speed up iterative image reconstruction is parallel computing with a computer cluster. However, as the number of computing threads increases, parallel efficiency decreases due to network transfer delay. In this paper, we propose a method to reduce data transfer between computing threads by introducing an intra-thread update. The update factor is collected from each slave thread and a global image is updated as usual in the first K sub-iterations. In the remaining sub-iterations, the global image is updated only at an interval controlled by a parameter L. Between those global updates, the intra-thread update is carried out, whereby an image update is performed locally in each slave thread. We investigated combinations of the K and L parameters based on a parallel implementation of RAMLA for the jPET-D4 scanner. Our evaluation used four workstations with a total of 16 slave threads. Each slave thread calculated a different set of LORs, which were divided according to ring-difference numbers. We assessed the image quality of the proposed method with a hotspot simulation phantom. The figures of merit were the full-width-at-half-maximum of the hotspots and the background normalized standard deviation. At the optimum K and L settings, we did not find significant change in the output images. We also applied the proposed method to a Hoffman phantom experiment and found that the difference due to the intra-thread update was negligible. With the intra-thread update, computation time could be reduced by about 23%.
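A schematic of the update schedule only (not the RAMLA reconstruction itself): for the first K sub-iterations every thread's update factor is merged into the global image; afterwards the merge happens every L sub-iterations, with local intra-thread updates in between. The toy multiplicative merge rule, sizes, and stand-in update factor are assumptions.

```python
import numpy as np

n_threads, n_subiters, K, L = 4, 24, 6, 3
rng = np.random.default_rng(2)
global_img = np.ones(16)
local_imgs = [global_img.copy() for _ in range(n_threads)]

def update_factor(img):             # stand-in for one thread's RAMLA update factor
    return 1.0 + 0.01 * rng.standard_normal(img.size)

for s in range(n_subiters):
    factors = [update_factor(img) for img in local_imgs]
    if s < K or (s - K) % L == 0:   # global update: merge and redistribute
        global_img *= np.prod(factors, axis=0) ** (1.0 / n_threads)
        local_imgs = [global_img.copy() for _ in range(n_threads)]
    else:                           # intra-thread update: no network transfer
        local_imgs = [img * f for img, f in zip(local_imgs, factors)]
```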
FORCARB2: An updated version of the U.S. Forest Carbon Budget Model
Linda S. Heath; Michael C. Nichols; James E. Smith; John R. Mills
2010-01-01
FORCARB2, an updated version of the U.S. FORest CARBon Budget Model (FORCARB), produces estimates of carbon stocks and stock changes for forest ecosystems and forest products at 5-year intervals. FORCARB2 includes a new methodology for carbon in harvested wood products, updated initial inventory data, a revised algorithm for dead wood, and now includes public forest...
Update of the Accounting Surface Along the Lower Colorado River
Wiele, Stephen M.; Leake, Stanley A.; Owen-Joyce, Sandra J.; McGuire, Emmet H.
2008-01-01
The accounting-surface method was developed in the 1990s by the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, to identify wells outside the flood plain of the lower Colorado River that yield water that will be replaced by water from the river. This method was needed to identify which wells require an entitlement for diversion of water from the Colorado River and need to be included in accounting for consumptive use of Colorado River water as outlined in the Consolidated Decree of the United States Supreme Court in Arizona v. California. The method is based on the concept of a river aquifer and an accounting surface within the river aquifer. The study area includes the valley adjacent to the lower Colorado River and parts of some adjacent valleys in Arizona, California, Nevada, and Utah and extends from the east end of Lake Mead south to the southerly international boundary with Mexico. Contours for the original accounting surface were hand drawn based on the shape of the aquifer, water-surface elevations in the Colorado River and drainage ditches, and hydrologic judgment. This report documents an update of the original accounting surface based on updated water-surface elevations in the Colorado River and drainage ditches and the use of simple, physically based ground-water flow models to calculate the accounting surface in four areas adjacent to the free-flowing river.
Soong, David T.; Over, Thomas M.
2015-01-01
Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.
Probabilistic Methods for Structural Reliability and Risk
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is a multiscale, multifunctional and it is based on the most elemental level. A multifactor interaction model is used to describe the material properties which are subsequently evaluated probabilistically. The metallic structure is a two rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite element. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two rotor engine is about 0.0001 and the composite built-up structure is also 0.0001.
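A Monte Carlo sketch of the kind of probabilistic evaluation described: material strength is sampled through a simple multifactor-interaction-style temperature dependence and compared with a random load, and risk is the fraction of samples that fail. The functional form, exponents, and distributions are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000
T = rng.normal(600.0, 20.0, n)                   # service temperature, K
T0, Tf = 300.0, 900.0                            # reference and final temperatures
S0 = rng.normal(1.0, 0.05, n)                    # reference strength (normalized)
strength = S0 * ((Tf - T) / (Tf - T0)) ** 0.5    # multifactor-interaction-style form
load = rng.normal(0.55, 0.05, n)                 # applied stress (normalized)

risk = np.mean(load > strength)                  # probability of failure estimate
print("estimated probability of failure:", risk)
```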
Section-constrained local geological interface dynamic updating method based on the HRBF surface
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Wu, Lixin; Zhou, Wenhui; Li, Chaoling; Li, Fengdan
2018-02-01
Boundaries, attitudes and sections are the most common data acquired from regional field geological surveys, and they are used for three-dimensional (3D) geological modelling. However, constructing topologically consistent 3D geological models from rapid and automatic regional modelling with convenient local modifications remains unresolved. In previous works, the Hermite radial basis function (HRBF) surface was introduced for the simulation of geological interfaces from geological boundaries and attitudes, which allows 3D geological models to be automatically extracted from the modelling area by the interfaces. However, the reasonableness and accuracy of unsupervised subsurface modelling are limited without further modifications generated through explanations and analyses performed by geology experts. In this paper, we provide flexible and convenient manual interactive manipulation tools for geologists to sketch constraint lines, and these tools may help geologists transform and apply their expert knowledge to the models. In the modified modelling workflow, the geological sections are treated as auxiliary constraints to construct more reasonable 3D geological models. The geometric characteristics of the section lines are abstracted to coordinates and normal vectors, and along with the transformed coordinates and vectors from boundaries and attitudes, these characteristics are adopted to co-calculate the implicit geological surface function parameters of the HRBF equations and form constrained geological interfaces from topographic (boundaries and attitudes) and subsurface data (sketched sections). Based on this new modelling method, a prototype system was developed, in which the section lines can be imported from databases or interactively sketched, and the models can be immediately updated after new constraints are added. Experimental comparisons showed that all boundary, attitude and section data are well represented in the constrained models, which are consistent with expert explanations and help improve the quality of the models.
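A full HRBF fit couples kernel values and derivatives so that both point and normal (attitude) constraints are honored. The sketch below is a simplified, hypothetical stand-in: rather than Hermite constraints, it generates off-surface points along the normals and fits a standard RBF interpolant whose zero level set approximates the interface; appending digitized section points and refitting is the dynamic-updating step.

```python
# Simplified stand-in for HRBF interface fitting (illustrative, not the authors'
# code): instead of Hermite constraints, off-surface points are created along the
# attitude normals and a standard RBF interpolant is fitted; its zero level set
# approximates the geological interface.
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_interface(points, normals, offset=10.0):
    """points: (n,3) boundary/section points on the interface;
    normals: (n,3) unit normals from attitude measurements."""
    on = points
    above = points + offset * normals   # value +offset (hanging-wall side)
    below = points - offset * normals   # value -offset (footwall side)
    xyz = np.vstack([on, above, below])
    val = np.concatenate([np.zeros(len(on)),
                          np.full(len(above), offset),
                          np.full(len(below), -offset)])
    return RBFInterpolator(xyz, val, kernel='cubic')

# Adding a sketched section simply appends its digitized points (and normals)
# to the constraint set and refits -- the "dynamic updating" step.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (20, 3))
nrm = np.tile([0.0, 0.0, 1.0], (20, 1))
f = fit_interface(pts, nrm)
print(f(np.array([[50.0, 50.0, 50.0]])))  # signed value; ~0 near the interface
```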
UPDATE ON EPA'S URBAN WATERSHED MANAGEMENT BRANCH MODELING ACTIVITIES
This paper provides the Stormwater Management Model (SWMM) user community with a description of the Environmental Protection Agency (EPA's) Office of Research and Development (ORD) approach to urban watershed modeling research and provides an update on current ORD SWMM-related pr...
WHEN ONSET MEETS DESISTANCE: COGNITIVE TRANSFORMATION AND ADOLESCENT MARIJUANA EXPERIMENTATION*
Kreager, Derek A.; Ragan, Daniel T.; Nguyen, Holly; Staff, Jeremy
2016-01-01
Purpose Desistance scholars primarily focus on changing social roles, cognitive transformations, and shifting identities to understand the cessation of serious crime and illicit drug use in adulthood. In the current study, we move the spotlight away from adulthood and toward adolescence, the developmental stage when the prevalence of offending and substance use peaks and desistance from most of these behaviors begins. Our primary hypothesis is that changes in perceived psychic rewards surrounding initial forays into marijuana use strongly predict adolescents’ decisions to cease or persist in that behavior. In addition, based on social learning expectations, we hypothesize that peer perceptions and behaviors provide mechanisms for perceptual change. Methods We test these hypotheses using longitudinal data on marijuana use, perceptions, and peer networks from the PROmoting School-community-university Partnerships to Enhance Resilience (PROSPER) study. We estimate hazard models of marijuana initiation and within-person models of perceptual updating for youth from grades 6 to 12 (n=6,154). Results We find that changes in marijuana’s perceived psychic rewards surrounding initiation differentiated experimenters from persisters. Experimenters had significantly lower updated perceptions of marijuana as a fun behavior compared to persisters, and these perceptions dropped after the initiation wave. In contrast, persisters updated their perceptions upward and maintained more positive perceptions over time. Inconsistent with social learning expectations, initiators’ updated perceptions of marijuana as a fun activity were not explained by peer-reported behaviors or attitudes. PMID:27478762
Quasi-continuous stochastic simulation framework for flood modelling
NASA Astrophysics Data System (ADS)
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practice is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the neglect of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modelling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types of SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of the potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of the synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing the design variables to be expressed in statistical terms and the flood risk to be properly evaluated.
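As a concrete illustration of steps (2)-(3), a minimal Python sketch is given below; the AMC thresholds and CN-conversion formulas are the usual textbook values and the rainfall series is synthetic, so this is a stand-in for, not a reproduction of, the authors' scheme.

```python
# Minimal sketch of the quasi-continuous SCS-CN steps (2)-(3): the curve number
# is switched between AMC classes from accumulated five-day rainfall, then daily
# runoff follows the standard SCS-CN formula. AMC thresholds (13/28 mm) and the
# CN conversion formulas are the usual textbook values, used here illustratively.
import numpy as np

def cn_for_amc(cn2, p5):
    if p5 < 13.0:    # AMC I (dry)
        return 4.2 * cn2 / (10.0 - 0.058 * cn2)
    if p5 > 28.0:    # AMC III (wet)
        return 23.0 * cn2 / (10.0 + 0.13 * cn2)
    return cn2       # AMC II (average)

def scs_runoff(p, cn):
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction
    return (p - ia) ** 2 / (p - ia + s) if p > ia else 0.0

rain = np.array([0, 5, 0, 20, 35, 10, 0, 2, 40, 0], dtype=float)  # synthetic daily rain (mm)
runoff = []
for t, p in enumerate(rain):
    p5 = rain[max(0, t - 5):t].sum()          # accumulated five-day rainfall
    runoff.append(scs_runoff(p, cn_for_amc(75.0, p5)))
print(np.round(runoff, 2))
```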
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
NASA Astrophysics Data System (ADS)
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, so the models should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, random scheme, order-sequential scheme and shuffled scheme, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which considers the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities, was used. For the parallel scheme only, collision detection and resolution must be considered, resulting in a great difference from the other update schemes. For pedestrian evacuation, the evacuation time is lengthened, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. In the face of a bottleneck, for example an exit, using the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distribution of density and velocity show significant discrepancies across the four update schemes when simulating pedestrian flow with high desired velocities. Update schemes may have no influence on simulated pedestrians' tendency to follow others, but the order-sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment.
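The toy sketch below (not the paper's FFCA) shows how the update scheme alone can change the outcome on a 1-D corridor: an ordered-sequential sweep from the front lets every agent advance, while a shuffled order leaves some agents blocked.

```python
# Toy illustration of update-scheme effects: pedestrians on a 1-D corridor all
# try to step right; under an ordered-sequential scheme the front agent clears
# the way for the next, while a shuffled scheme randomizes who gets blocked.
# A parallel scheme would additionally need collision detection/resolution.
import random

def step(positions, order):
    occupied = set(positions)
    for i in order:
        target = positions[i] + 1
        if target not in occupied:      # move only into an empty cell
            occupied.discard(positions[i])
            occupied.add(target)
            positions[i] = target
    return positions

random.seed(1)
pos_seq = [0, 1, 2, 3]                       # four pedestrians, front at cell 3
pos_shuf = list(pos_seq)
step(pos_seq, order=[3, 2, 1, 0])            # ordered-sequential (front first): all move
step(pos_shuf, random.sample(range(4), 4))   # shuffled: some agents stay blocked
print(pos_seq, pos_shuf)
```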
The influence of parametric and external noise in act-and-wait control with delayed feedback.
Wang, Jiaxing; Kuske, Rachel
2017-11-01
We apply several novel semi-analytic approaches for characterizing and calculating the effects of noise in a system with act-and-wait control. For concrete illustration, we apply these to a canonical balance model for an inverted pendulum to study the combined effect of delay and noise within the act-and-wait setting. While act-and-wait control facilitates strong stabilization through deadbeat control, a comparison of different models with continuous vs. discrete updating of the control strategy in the active period illustrates how delays combined with imprecise application of the control can seriously degrade the performance. We give several novel analyses of a generalized act-and-wait control strategy, allowing flexibility in the updating of the control strategy, in order to understand the sensitivities to delays and random fluctuations. In both the deterministic and stochastic settings, we give analytical and semi-analytical results that characterize and quantify the dynamics of the system. These results include the size and shape of stability regions, densities for the critical eigenvalues that capture the rate of reaching the desired stable equilibrium, and amplification factors for sustained fluctuations in the context of external noise. They also provide the dependence of these quantities on the length of the delay and the active period. In particular, we see that the combined influence of delay, parametric error or external noise, and on-off control can qualitatively change the dynamics, thus reducing the robustness of the control strategy. We also capture the dependence on how frequently the control is updated, allowing an interpolation between continuous and frequent updating. In addition to providing insights for these specific models, the methods we propose are generalizable to other settings with noise, delay, and on-off control, where analytical techniques are otherwise scarce.
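A minimal simulation sketch of the act-and-wait idea follows; the scalar plant, gains, and period lengths are invented for illustration and do not reproduce the paper's pendulum model.

```python
# Minimal sketch of act-and-wait control with delayed feedback: a scalar
# unstable plant x' = a*x + u, with u = -k*x(t - tau) during the "act" window
# and u = 0 while waiting, integrated with Euler steps plus additive noise.
import numpy as np

a, k, tau = 1.0, 3.0, 0.1          # plant growth rate, gain, feedback delay (s)
dt, T = 0.001, 5.0
t_wait, t_act = 0.2, 0.1           # wait/act period lengths (s)
n, d = int(T / dt), int(tau / dt)

rng = np.random.default_rng(0)
x = np.zeros(n)
x[0] = 0.1
for i in range(1, n):
    phase = (i * dt) % (t_wait + t_act)
    acting = phase >= t_wait                             # control only in the act window
    u = -k * x[i - 1 - d] if (acting and i > d) else 0.0
    noise = 0.01 * np.sqrt(dt) * rng.standard_normal()   # external noise term
    x[i] = x[i - 1] + dt * (a * x[i - 1] + u) + noise
print(f"|x| at end: {abs(x[-1]):.3f}")
```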
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
Selective updating of working memory content modulates meso-cortico-striatal activity.
Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S
2011-08-01
Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling.
An architecture for designing fuzzy logic controllers using neural networks
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.
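For readers unfamiliar with the TD method mentioned above, a minimal generic TD(0) update looks as follows (an illustrative sketch, not the paper's network).

```python
# A minimal TD(0) update of a state-value predictor:
# V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states)                    # value prediction per state

def td0_update(V, s, r, s_next):
    td_error = r + gamma * V[s_next] - V[s]   # prediction error on the plant's behavior
    V[s] += alpha * td_error
    return td_error

# One illustrative transition: state 2 -> state 3 with reward 1.0
err = td0_update(V, s=2, r=1.0, s_next=3)
print(V, err)
```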
NASA Astrophysics Data System (ADS)
Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing
2014-11-01
Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.
NASA Technical Reports Server (NTRS)
Carlson, Toby N.
1988-01-01
Using model development, image analysis and micrometeorological measurements, the objective is to push beyond the present limitations of using the infrared temperature method for remotely determining surface energy fluxes and soil moisture over vegetation. Model development consists of three aspects: (1) a more complex vegetation formulation which is more flexible and realistic; (2) a method for modeling the fluxes over patchy vegetation cover; and (3) a method for inferring a two-layer soil vertical moisture gradient from analyses of horizontal variations in surface temperatures. HAPEX and FIFE satellite data will be used along with aircraft thermal infrared and solar images as input for the models. To test the models, moisture availability and bulk canopy resistances will be calculated from data collected locally at the Rock Springs experimental field site and, eventually, from the FIFE project.
Deep neural network and noise classification-based speech enhancement
NASA Astrophysics Data System (ADS)
Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei
2017-07-01
In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
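A hedged sketch of the noise-classification front end is given below, using scikit-learn's GaussianMixture; the MFCC-like features are synthetic stand-ins, not real audio features.

```python
# Sketch of the noise-classification front end: one GMM per noise type is
# trained on MFCC vectors from speech-absent frames, and a test frame is
# assigned to the type with the highest log-likelihood (which would then
# select the matching DNN). Feature extraction is faked with random data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {                                # MFCC-like vectors per noise type (synthetic)
    "babble": rng.normal(0.0, 1.0, (500, 13)),
    "factory": rng.normal(2.0, 1.5, (500, 13)),
}
models = {name: GaussianMixture(n_components=4, random_state=0).fit(feats)
          for name, feats in train.items()}

frame = rng.normal(2.0, 1.5, (1, 13))    # a speech-absent test frame
scores = {name: gm.score(frame) for name, gm in models.items()}
noise_type = max(scores, key=scores.get) # pick the best-scoring noise model
print(noise_type, scores)
```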
Peterson, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
Automated wind load characterization of wind turbine structures by embedded model updating
NASA Astrophysics Data System (ADS)
Swartz, R. Andrew; Zimmerman, Andrew T.; Lynch, Jerome P.
2010-04-01
The continued development of renewable energy resources is essential if the nation is to limit its carbon footprint and enjoy independence in energy production. Key to that effort are reliable generators of renewable energy that are economically competitive with legacy sources. In the area of wind energy, a major contributor to the cost of implementation is the large uncertainty regarding the condition of wind turbines in the field, due to a lack of information about loading, dynamic response, and the fatigue life expended by the structure. Under favorable circumstances, this uncertainty leads to overly conservative designs and maintenance schedules. Under unfavorable circumstances, it leads to inadequate maintenance schedules, damage to electrical systems, or even structural failure. Low-cost wireless sensors can provide more certainty for stakeholders by measuring the dynamic response of the structure to loading, estimating the fatigue state of the structure, and extracting loading information from the structural response without the need for an upwind instrumentation tower. This study presents a method for using wireless sensor networks to estimate the spectral properties of the loading on a wind turbine tower based on its measured response and some rudimentary knowledge of its structure. Structural parameters are estimated via model updating in the frequency domain to produce an identification of the system. The updated structural model and the measured output spectra are then used to estimate the input spectra. Laboratory results are presented indicating accurate load characterization.
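Under a strongly simplifying single-DOF assumption (invented parameters, not the study's turbine model), the final step reduces to dividing the measured output spectrum by the squared FRF magnitude of the updated model:

```python
# Sketch of the input-spectrum recovery step with a single-DOF stand-in for the
# updated structural model: once m, c, k have been identified, the load spectrum
# follows from the FRF magnitude as S_ff(w) = S_yy(w) / |H(w)|^2.
import numpy as np

m, c, k = 1.0e4, 2.0e3, 1.0e6            # updated structural parameters (assumed)
w = np.linspace(1.0, 50.0, 500)          # frequency grid (rad/s)
H = 1.0 / (k - m * w**2 + 1j * c * w)    # displacement FRF of the SDOF model

S_ff_true = 1.0 / (1.0 + (w / 20.0)**2)  # synthetic "wind" load spectrum
S_yy = np.abs(H)**2 * S_ff_true          # measured output spectrum (simulated)

S_ff_est = S_yy / np.abs(H)**2           # input spectrum estimate
print(np.allclose(S_ff_est, S_ff_true))  # True: exact in this noise-free sketch
```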
Mester, Jessica L.; Mercer, MaryBeth; Goldenberg, Aaron; Moore, Rebekah A.; Eng, Charis; Sharp, Richard R.
2015-01-01
Background Research biobanks collect biological samples and health information. Previous work shows that biobank participants desire general study updates, but preferences regarding the method or frequency of these communications have not been explored. Thus, we surveyed participants in a long-standing research biobank. Methods Eligible participants were drawn from a study of patients with personal/family history suggestive of Cowden syndrome, a poorly-recognized inherited cancer syndrome. Participants gave blood samples and access to medical records and received individual results but had no other study interactions. The biobank had 3618 participants at sampling. Survey eligibility included age ≥18 years, enrollment within the biobank’s first five years, normal PTEN analysis, and a contiguous United States address. Multivariate logistic regression analyses identified predictors of participant interest in internet-based vs. offline methods and in methods allowing participant-researcher interaction vs. one-way communication. Independent variables were narrowed using independent Pearson correlations with a cutoff of p<0.2, with p<0.02 considered significant. Results Surveys were returned from 840/1267 (66%) eligible subjects. Most (97%) wanted study updates, with 92% wanting updates at least once a year. Participants preferred paper (66%) or emailed (62%) newsletter methods, with 95% selecting one of these. Older, less-educated, and lower-income respondents strongly preferred offline approaches (p<0.001). Most (93%) had no concerns about receiving updates and 97% were willing to provide health updates to researchers. Conclusion Most participants were comfortable receiving and providing updated information. Demographic factors predicted communication preferences. Impact Researchers should make plans for ongoing communication early in study development, and funders should support the necessary infrastructure for these efforts. PMID:25597748
NASA Astrophysics Data System (ADS)
Niazi, A.; Bentley, L. R.; Hayashi, M.
2016-12-01
Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to condition lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer consisting of relatively high permeable sandstone channels embedded in relatively low permeable mudstone using hydraulic data. The hydraulic data consist of two-hour single well pumping tests extracted from the public water well database for a 250-km2 watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition probability - Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and pumping test data and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.
Vectorized schemes for conical potential flow using the artificial density method
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.
1984-01-01
A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.
2011-01-01
successful in any operational scenario. In FY 2011, the Army updated the Cost of the Doctrinal Army Model using improved and refined methods and the...8,100 severely wounded Soldiers and veterans with cases spanning from Post- Traumatic Stress Disorder to double amputees. This population is supported... method dubbed “EoIP” (Everything over Internet Protocol). Getting the right information at the right time requires universal accessibility
Statistical Moments in Variable Density Incompressible Mixing Flows
2015-08-28
front tracking method: Verification and application to simulation of the primary breakup of a liquid jet . SIAM J. Sci. Comput., 33:1505–1524, 2011. [15... elliptic problem. In case of failure, Generalized Minimal Residual (GMRES) method [78] is used instead. Then update face velocities as follows: u n+1...of the ACM Solid and Physical Modeling Symposium, pages 159–170, 2008. [51] D. D. Joseph. Fluid dynamics of two miscible liquids with diffusion and
Cortical Surface Registration for Image-Guided Neurosurgery Using Laser-Range Scanning
Sinha, Tuhin K.; Cash, David M.; Galloway, Robert L.; Weil, Robert J.
2013-01-01
In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom were 1.0 ± 0.2, 2.0 ± 0.3, and 1.2 ± 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 ± 1.0, 0.9 ± 0.6, and 1.3 ± 0.5 mm for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper in conjunction with the quantitative data provide impetus for using LRS technology within the model-updated image-guided surgery framework. PMID:12906252
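For reference, a minimal point-to-point ICP iteration (the generic algorithm, not the SurfaceMI variant) can be sketched as follows.

```python
# Minimal point-to-point ICP sketch: each iteration pairs every source point
# with its nearest target point, then solves the best rigid transform in
# closed form via SVD (Kabsch algorithm).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Kabsch: rotation R and translation t minimizing ||R*src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)          # nearest-neighbor correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
dst = rng.uniform(0, 1, (200, 3))
theta = 0.2                               # small known rotation for the test
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = (dst - 0.5) @ Rz.T + 0.5 + 0.05     # rotated + translated copy
print(np.abs(icp(src, dst) - dst).mean()) # small residual after alignment
```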
Quasi-Newton parallel geometry optimization methods
NASA Astrophysics Data System (ADS)
Burger, Steven K.; Ayers, Paul W.
2010-07-01
Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally, these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determining the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods, we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up with the number of processors up to about eight processors.
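As a generic illustration of the quasi-Newton updates discussed above (the standard BFGS formula, not the paper's multiple-secant scheme):

```python
# Generic BFGS secant update: given a step s and gradient change y, the Hessian
# approximation B is updated so that the secant condition B_new @ s = y holds.
import numpy as np

def bfgs_update(B, s, y):
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Quadratic test: f(x) = 0.5 x^T A x, so the true Hessian is A everywhere.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)                            # initial Hessian guess
x = np.array([1.0, 1.0])
for _ in range(8):
    g = A @ x                            # gradient at x
    if np.linalg.norm(g) < 1e-10:
        break                            # converged; avoid a degenerate update
    step = -np.linalg.solve(B, g)        # quasi-Newton step
    y = A @ (x + step) - g               # gradient change along the step
    B = bfgs_update(B, step, y)
    x = x + step
print(np.round(B, 3))                    # approaches A; secant condition holds
print(x)                                 # near the minimizer at the origin
```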
SysML model of exoplanet archive functionality and activities
NASA Astrophysics Data System (ADS)
Ramirez, Solange
2016-08-01
The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the exoplanet archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will be presenting a SysML model that helps us perform these update activities on a weekly basis.
NASA Astrophysics Data System (ADS)
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading, and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading, and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This makes the updating process difficult, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine
Liu, Zhiyuan; Wang, Changhui
2015-01-01
In this paper, a new method is developed for compensating mass air flow (MAF) sensor error and for updating the error map (or lookup table) online, addressing errors due to installation and aging in a diesel engine. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described by a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained from the proposed method. PMID:26512675
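A hedged sketch of the 2D-map parameterization follows; the grid axes and values are invented, but it shows how the bilinearly interpolated map output becomes a dot product between a membership-weight regression vector and the parameter vector, which is what makes a gradient-type adaptive update possible.

```python
# Sketch of the 2-D map parameterization: the error map is stored as node
# values theta on a (speed x fuel) grid, and the bilinearly interpolated output
# is the dot product w(z) . theta, where w(z) holds the four membership weights
# of the operating point z. Grid ranges and values are made up here.
import numpy as np

speed_ax = np.array([800.0, 1600.0, 2400.0, 3200.0])   # engine speed nodes (rpm)
fuel_ax = np.array([5.0, 15.0, 25.0])                  # injection quantity nodes (mg)
theta = np.zeros(speed_ax.size * fuel_ax.size)         # map parameters (flattened)

def regressor(speed, fuel):
    """Regression vector w with the 4 bilinear weights; w @ theta = map output."""
    i = np.clip(np.searchsorted(speed_ax, speed) - 1, 0, speed_ax.size - 2)
    j = np.clip(np.searchsorted(fuel_ax, fuel) - 1, 0, fuel_ax.size - 2)
    a = (speed - speed_ax[i]) / (speed_ax[i + 1] - speed_ax[i])
    b = (fuel - fuel_ax[j]) / (fuel_ax[j + 1] - fuel_ax[j])
    w = np.zeros_like(theta)
    for di, dj, wt in [(0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                       (0, 1, (1 - a) * b), (1, 1, a * b)]:
        w[(i + di) * fuel_ax.size + (j + dj)] = wt
    return w

# Because the map output is linear in theta, a gradient-type adaptive law applies:
w = regressor(2000.0, 12.0)
error = 0.03                       # observed MAF error at this operating point
theta += 0.5 * error * w           # simple gradient-type parameter update
print(w @ theta)
```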
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
NASA Technical Reports Server (NTRS)
1980-01-01
Methods using snowcovered area to update seasonal forecasts as snowmelt progresses are also being used in quasi-operational situations. The input of snowcovered area to snowmelt models for short-term predictions was attempted in two ways; namely, the modification of existing hydrologic models and/or the use of models that were specifically designed to use snowcovered area. A daily snowmelt runoff model was used with LANDSAT data to simulate discharge on remote basins in the Wind River Mountains of Wyoming. Daily predicted and actual flows compare closely, and, summarized over the entire snowmelt season (April 1 - September 30), the average difference is only three percent. The model and snowcovered area data are currently being tested on additional watersheds to determine the method's transferability.
Electronic Education System Model-2
ERIC Educational Resources Information Center
Güllü, Fatih; Kuusik, Rein; Laanpere, Mart
2015-01-01
In this study, we presented the new EES Model-2, extended from the EES model for more productive implementation in e-learning process design and modelling in higher education. Most of the updates were related to the uppermost instructional layer. We updated the layer's learning-processes object to adapt the educational process for young and old people,…
2010-09-01
raytracing and travel-time calculation in 3D Earth models, such as the finite-difference eikonal method (e.g., Podvin and Lecomte, 1991), fast...by Reiter and Rodi (2009) in constructing JWM. Two teleseismic data sets were considered, both extracted from the EHB database (Engdahl et al...extracted from the updated EHB database distributed by the International Seismological Centre (http://www.isc.ac.uk/EHB/index.html). The new database
Porto, William F.; Pires, Állan S.; Franco, Octavio L.
2012-01-01
Antimicrobial peptides (AMPs) have been proposed as an alternative for controlling resistant pathogens. However, due to the multifunctional properties of several AMP classes, there has until now been no way to perform efficient AMP identification except through in vitro and in vivo tests. Nevertheless, an indication of activity can be provided by prediction methods. In order to contribute to the AMP prediction field, CS-AMPPred (Cysteine-Stabilized Antimicrobial Peptides Predictor) is presented here, consisting of an updated version of the Support Vector Machine (SVM) model for antimicrobial activity prediction in cysteine-stabilized peptides. CS-AMPPred is based on five sequence descriptors: indexes of (i) α-helix and (ii) loop formation; and averages of (iii) net charge, (iv) hydrophobicity and (v) flexibility. CS-AMPPred was trained on 310 cysteine-stabilized AMPs and 310 sequences extracted from the PDB. The polynomial kernel achieves the best accuracy on 5-fold cross-validation (85.81%), while the radial and linear kernels achieve 84.19%. Tested on a blind data set, the polynomial and radial kernels achieve an accuracy of 90.00%, while the linear model achieves 89.33%. All three models reach higher accuracies than previously described methods. A standalone version of CS-AMPPred is available for download at
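A sketch of the model family follows (synthetic descriptor values, not the CS-AMPPred training data): an SVM with a polynomial kernel over the five descriptors, scored with 5-fold cross-validation.

```python
# Sketch of an SVM with a polynomial kernel over five sequence descriptors,
# evaluated with 5-fold cross-validation. Descriptor values are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: helix index, loop index, net charge, hydrophobicity, flexibility
X_amp = rng.normal([0.6, 0.3, 3.0, 0.4, 0.5], 0.2, (310, 5))   # AMP-like
X_neg = rng.normal([0.4, 0.5, 0.0, 0.1, 0.4], 0.2, (310, 5))   # PDB negatives
X = np.vstack([X_amp, X_neg])
y = np.concatenate([np.ones(310), np.zeros(310)])

model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")
```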
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models usually operate as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates the probability feature values of the translation model in use based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated directly into the current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
NODA for EPA's Updated Ozone Transport Modeling
Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS) along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom
2012-08-01
NHTSA recently completed a logistic regression analysis updating its 2003 and 2010 studies of the relationship between vehicle mass and US fatality risk per vehicle mile traveled (VMT). The new study updates the previous analyses in several ways: updated FARS data from 2002 to 2008 for MY00 to MY07 vehicles are used; induced exposure data from police reported crashes in several additional states are added; a new vehicle category for car-based crossover utility vehicles (CUVs) and minivans is created; crashes with other light-duty vehicles are divided into two groups based on the crash partner vehicle’s weight, and a category for all other fatal crashes is added; and new control variables for new safety technologies and designs, such as electronic stability controls (ESC), side airbags, and methods to meet the voluntary agreement to improve light truck compatibility with cars, are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom
2012-08-01
NHTSA recently completed a logistic regression analysis (Kahane 2012) updating its 2003 and 2010 studies of the relationship between vehicle mass and US fatality risk per vehicle mile traveled (VMT). The new study updates the previous analyses in several ways: updated FARS data for 2002 to 2008 involving MY00 to MY07 vehicles are used; induced exposure data from police reported crashes in several additional states are added; a new vehicle category for car-based crossover utility vehicles (CUVs) and minivans is created; crashes with other light-duty vehicles are divided into two groups based on the crash partner vehicle’s weight, and a category for all other fatal crashes is added; and new control variables for new safety technologies and designs, such as electronic stability controls (ESC), side airbags, and methods to meet the voluntary agreement to improve light truck compatibility with cars, are included.
Determination of replicate composite bone material properties using modal analysis.
Leuridan, Steven; Goossens, Quentin; Pastrav, Leonard; Roosen, Jorg; Mulier, Michiel; Denis, Kathleen; Desmet, Wim; Sloten, Jos Vander
2017-02-01
Replicate composite bones are used extensively for in vitro testing of new orthopedic devices. Contrary to tests with cadaveric bone material, which inherently exhibits large variability, they offer a standardized alternative with limited variability. Accurate knowledge of the composite's material properties is important when interpreting in vitro test results and when using them in FE models of biomechanical constructs. The cortical bone analogue material properties of three different fourth-generation composite bone models were determined by updating FE bone models using experimental and numerical modal analyses results. The influence of the cortical bone analogue material model (isotropic or transversely isotropic) and the inter- and intra-specimen variability were assessed. Isotropic cortical bone analogue material models failed to represent the experimental behavior in a satisfactory way even after updating the elastic material constants. When transversely isotropic material models were used, the updating procedure resulted in a reduction of the longitudinal Young's modulus from 16.00 GPa before updating to an average of 13.96 GPa after updating. The shear modulus was increased from 3.30 GPa to an average value of 3.92 GPa. The transverse Young's modulus was lowered from an initial value of 10.00 GPa to 9.89 GPa. Low inter- and intra-specimen variability was found.
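The updating idea can be sketched with an analytic stand-in for the FE model (all beam dimensions invented): material constants are adjusted by nonlinear least squares until the model's natural frequencies match the measured ones.

```python
# Illustrative stand-in for the updating procedure: an analytic cantilever beam
# replaces the FE model, and Young's modulus is adjusted so the model's natural
# frequencies match "measured" ones via nonlinear least squares -- the same
# discrepancy-minimization idea as the bone-model updating.
import numpy as np
from scipy.optimize import least_squares

L, b, h, rho = 0.4, 0.03, 0.01, 1640.0   # beam length/width/height (m), density
A, I = b * h, b * h**3 / 12.0
lam = np.array([1.875, 4.694, 7.855])    # cantilever eigenvalue constants

def freqs(E):                            # analytic natural frequencies (Hz)
    return (lam**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

f_meas = freqs(13.96e9) * (1 + 0.01 * np.random.default_rng(0).standard_normal(3))

res = least_squares(lambda p: freqs(p[0] * 1e9) - f_meas, x0=[16.0])
print(f"updated E: {res.x[0]:.2f} GPa")  # pulled from 16.0 toward ~13.96 GPa
```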
A review of methods for updating forest monitoring system estimates
Hector Franco-Lopez; Alan R. Ek; Andrew P. Robinson
2000-01-01
Intensifying interest in forests and the development of new monitoring technologies have induced major changes in forest monitoring systems in the last few years, including major revisions in the methods used for updating. This paper describes the methods available for projecting stand- and plot-level information, emphasizing advantages and disadvantages, and the...
Xian, George Z.; Homer, Collin G.
2009-01-01
The U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 is widely used as a baseline for national land cover and impervious conditions. To ensure timely and relevant data, it is important to update this base to a more recent time period. A prototype method was developed to update the land cover and impervious surface by individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season from both 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, impervious surface was estimated for areas of change by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain a variety of metropolitan areas. Results from the five study areas show that the vast majority of impervious surface changes associated with urban developments were accurately captured and updated. The approach optimizes mapping efficiency and can provide users a flexible method to generate updated impervious surface at national and regional scales.
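A minimal numpy sketch of the change-vector step follows; the imagery is synthetic and the scalar threshold stands in for the method's conservative, class-specific thresholds.

```python
# Minimal sketch of the change-vector step: per-pixel spectral vectors from the
# 2001 and 2006 Landsat dates are differenced, and pixels whose change magnitude
# exceeds a threshold are flagged as change; impervious surface would then be
# re-estimated only there.
import numpy as np

rng = np.random.default_rng(0)
bands_2001 = rng.uniform(0, 1, (100, 100, 6))    # normalized 6-band image, 2001
bands_2006 = bands_2001.copy()
bands_2006[40:60, 40:60] += 0.3                  # synthetic "new development"

change_vec = bands_2006 - bands_2001
magnitude = np.linalg.norm(change_vec, axis=2)   # per-pixel change magnitude
changed = magnitude > 0.5                        # conservative scalar threshold
print(changed.sum(), "pixels flagged as change") # the 20x20 altered block
```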
Stover, Tracy E.; Baker, James S.; Ratliff, Michael D.; ...
2018-03-02
The classic Limiting Surface Density (LSD) method is an empirical calculation technique for analyzing and setting mass limits for fissile items in storage arrays. LSD is a desirable method because it can reduce or eliminate the need for lengthy detailed Monte Carlo models of storage arrays. The original (or classic) method was developed based on idealized arrays of bare spherical metal items in air-spaced cubic units in a water-reflected cubic array. In this case, the geometric and material-based surface densities were acceptably correlated by linear functions. Later updates to the method were made to allow for concrete reflection rather than water, cylindrical masses rather than spheres, different material forms, and noncubic arrays. However, in the intervening four decades since those updates, little work has been done to update the method, especially for use with contemporary highly heterogeneous shipping packages that are noncubic and stored in noncubic arrays. In this work, the LSD method is reevaluated for application to highly heterogeneous shipping packages for fissile material. The package modeled is the 9975 shipping package, currently the primary package used to store fissile material at Savannah River Site’s K-Area Complex. The package is neither cubic nor rectangular but resembles nested cylinders of stainless steel, lead, aluminum, and Celotex. The fissile content is assumed to be a cylinder of plutonium metal. The packages may be arranged in arrays with both an equal number of packages per side (package cubic) and an unequal number of packages per side (noncubic). The cubic arrangements are used to derive the 9975-specific material and geometry constants for the classic linear form LSD method. The linear form of the LSD, with noncubic array adjustment, is applied and evaluated against computational models for these packages to determine the critical unit fissile mass. Sensitivity equations are derived from the classic method, and these are also used to make projections of the critical unit fissile mass. It was discovered that the heterogeneous packages have a nonlinear surface density versus critical mass relationship compared to the acceptably linear response of bare spherical fissile masses. Methodology is developed to address the nonlinear response. In so doing, the solution to the nonlinear LSD method becomes decoupled from the critical mass of a single unit, adding to its flexibility. The ability of the method to predict changes in neutron multiplication due to perturbations in a parameter is examined to provide a basis for analyzing upset conditions. In conclusion, a full rederivation of the classic LSD method from diffusion theory is also included as this was found to be lacking in the available literature.
Modal phase measuring deflectometry
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-10-14
In this work, a model-based method is applied to phase measuring deflectometry; the approach is named modal phase measuring deflectometry. The height and slopes of the surface under test are represented by mathematical models and updated by optimizing the model coefficients to minimize the discrepancy between the reprojection in ray tracing and the actual measurement. The pose of the screen relative to the camera is pre-calibrated and further optimized together with the shape coefficients of the surface under test. Simulations and experiments are conducted to demonstrate the feasibility of the proposed approach.
Diffusion measurement from observed transverse beam echoes
Sen, Tanaji; Fischer, Wolfram
2017-01-09
For this research, we study the measurement of transverse diffusion through beam echoes. We revisit earlier observations of echoes in RHIC and apply an updated theoretical model to these measurements. We consider three possible models for the diffusion coefficient and show that only one is consistent with measured echo amplitudes and pulse widths. This model allows us to parameterize the diffusion coefficients as functions of bunch charge. We demonstrate that echoes can be used to measure diffusion much quicker than present methods and could be useful to a variety of hadron synchrotrons.
The AFIS tree growth model for updating annual forest inventories in Minnesota
Margaret R. Holdaway
2000-01-01
As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...
NASA Astrophysics Data System (ADS)
Downey, N.; Begnaud, M. L.; Hipp, J. R.; Ballard, S.; Young, C. S.; Encarnacao, A. V.
2017-12-01
The SALSA3D global 3D velocity model of the Earth was developed to improve the accuracy and precision of seismic travel time predictions for a wide suite of regional and teleseismic phases. Recently, the global SALSA3D model was updated to include additional body wave phases including mantle phases, core phases, reflections off the core-mantle boundary and underside reflections off the surface of the Earth. We show that this update improves travel time predictions and leads directly to significant improvements in the accuracy and precision of seismic event locations as compared to locations computed using standard 1D velocity models like ak135, or 2½D models like RSTT. A key feature of our inversions is that path-specific model uncertainty of travel time predictions are calculated using the full 3D model covariance matrix computed during tomography, which results in more realistic uncertainty ellipses that directly reflect tomographic data coverage. Application of this method can also be done at a regional scale: we present a velocity model with uncertainty obtained using data obtained from the University of Utah Seismograph Stations. These results show a reduction in travel-time residuals for re-located events compared with those obtained using previously published models.
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
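The Stage-3 idea can be sketched generically (an AR(1) error update with an invented coefficient, not the ERRIS implementation):

```python
# Generic AR(1) error updating: in real time, the last known forecast error is
# propagated one step ahead with an AR(1) model and subtracted from the raw
# hydrological forecast.
import numpy as np

rho = 0.8                                 # AR(1) coefficient fitted in calibration
obs = np.array([10.0, 12.0, 15.0, 14.0])  # observed flows up to "now"
raw = np.array([11.0, 12.5, 16.0, 14.8, 13.0])  # model forecasts incl. next step

err_now = raw[3] - obs[3]                 # last known error (time t)
updated = raw[4] - rho * err_now          # AR-updated forecast for time t+1
print(f"raw {raw[4]:.2f} -> updated {updated:.2f}")
```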
Wherry, Susan A.; Wood, Tamara M.; Anderson, Chauncey W.
2015-01-01
Using the extended 1991–2010 external phosphorus loading dataset, the lake TMDL model was recalibrated following the same procedures outlined in the Phase 1 review. The version of the model selected for further development incorporated an updated sediment initial condition, a numerical solution method for the chlorophyll a model, changes to light and phosphorus factors limiting algal growth, and a new pH-model regression, which removed Julian day dependence in order to avoid discontinuities in pH at year boundaries. This updated lake TMDL model was recalibrated using the extended dataset in order to compare calibration parameters to those obtained from a calibration with the original 7.5-year dataset. The resulting algal settling velocity calibrated from the extended dataset was more than twice the value calibrated with the original dataset, and, because the calibrated values of algal settling velocity and recycle rate are related (more rapid settling required more rapid recycling), the recycling rate also was larger than that determined with the original dataset. These changes in calibration parameters highlight the uncertainty in critical rates in the Upper Klamath Lake TMDL model and argue for their direct measurement in future data collection to increase confidence in the model predictions.
NASA Astrophysics Data System (ADS)
Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue
2012-10-01
The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle, and their cost remains high. With GPS and wireless communication technology maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied in the updating of major roads in North China, and the experimental results reveal that it can accurately derive the geometric information of roads in various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
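A toy sketch of the point-classification step described above: each GPS fix is tested against its nearest candidate segment with a distance and a heading criterion, and fixes that fail either test are routed to the adding process. The thresholds d_max and a_max and the planar geometry helpers are hypothetical illustrations, not the paper's actual rules.

import math

def point_segment_distance(p, seg):
    # Euclidean distance from point p to the finite segment seg
    (x1, y1), (x2, y2) = seg
    px, py = p
    dx, dy = x2 - x1, y2 - y1
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def segment_heading(seg):
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0

def angle_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def classify_point(pt, heading, segments, d_max=30.0, a_max=30.0):
    # 'match' -> the fix supports an existing segment (modifying process);
    # 'new'   -> the fix suggests geometry to add to the candidate network.
    best = min(segments, key=lambda s: point_segment_distance(pt, s), default=None)
    if best is None or point_segment_distance(pt, best) > d_max:
        return "new", None
    if angle_diff(heading, segment_heading(best)) > a_max:
        return "new", None
    return "match", best

roads = [((0.0, 0.0), (100.0, 0.0))]
print(classify_point((50.0, 5.0), 2.0, roads))   # ('match', ...)
print(classify_point((50.0, 80.0), 2.0, roads))  # ('new', None)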
NASA Astrophysics Data System (ADS)
Dill, Robert; Bergmann-Wolf, Inga; Thomas, Maik; Dobslaw, Henryk
2016-04-01
The global numerical weather prediction model routinely operated at the European Centre for Medium-Range Weather Forecasts (ECMWF) is typically updated about twice a year to incorporate the most recent improvements in the numerical scheme, the physical model or the data assimilation procedures into the system, steadily improving daily weather forecasting quality. Even though such changes frequently affect the long-term stability of meteorological quantities, data from the ECMWF deterministic model are often preferred over alternatively available atmospheric re-analyses due to both the availability of the data in near real-time and the substantially higher spatial resolution. However, global surface pressure time-series, which are crucial for the interpretation of geodetic observables such as Earth rotation, surface deformation, and the Earth's gravity field, are particularly affected by changes in the surface orography of the model associated with every major change in horizontal resolution, as happened, e.g., in February 2006, January 2010, and May 2015 in the case of the ECMWF operational model. In this contribution, we present an algorithm to harmonize surface pressure time-series from the operational ECMWF model by projecting them onto a time-invariant reference topography under consideration of the time-variable atmospheric density structure. The effectiveness of the method will be assessed globally in terms of pressure anomalies. In addition, we will discuss the impact of the method on predictions of crustal deformations based on ECMWF input, which have recently been made available by GFZ Potsdam.
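A minimal sketch of the projection idea, assuming a single layer-mean virtual temperature in the hypsometric equation; the paper's algorithm accounts for the full time-variable atmospheric density structure, which this one-line reduction does not.

import numpy as np

G, RD = 9.80665, 287.04  # gravity (m/s^2) and dry-air gas constant (J/kg/K)

def to_reference_topography(p_s, z_model, z_ref, t_v):
    # Project surface pressure p_s (Pa) from the model orography height
    # z_model (m) onto the reference height z_ref (m), using the layer
    # virtual temperature t_v (K) in the hypsometric equation.
    return p_s * np.exp(-G * (z_ref - z_model) / (RD * t_v))

print(to_reference_topography(98000.0, 350.0, 300.0, 280.0))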
NASA Astrophysics Data System (ADS)
Moorkamp, M.; Fishwick, S.; Jones, A. G.
2015-12-01
Typical surface wave tomography can recover well the velocity structure of the upper mantle in the depth range between 70 and 200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved and thus their interpretation has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.
NRL/VOA Modifications to IONCAP as of 12 July 1988
1989-08-02
suitable for wide-area coverage studies), to incorporate a newer noise model, to improve the accuracy of some calculations, to correct a few... [Remaining text is table-of-contents residue; the recoverable headings are "Incorporation of an Updated Noise Model into IONCAP" and "Listings of Four IONCAP Subroutines Supporting the Updated Noise Model".]
Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7
This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.
Palma, Eduardo; Salinas, Lilian; Aracena, Julio
2016-03-01
Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. Thus, the interaction digraphs so labeled (update digraphs) encode the non-equivalent schemes. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we first construct an algorithm that determines the set of update digraphs of a BN. For that, we use a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated scheme. This algorithm also works in the case where there is partial knowledge about the relative order of the updating of the states of the nodes. We exhibit some examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, made in Java, and the files with the outputs of the algorithms used with the GRNs are available at www.inf.udec.cl/~lilian/UDE/. CONTACT: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
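As a toy illustration of why deterministic update schedules matter, the following sketch applies two different block-sequential schedules to an invented two-node network and reaches different states; the rules are made up for the example and are unrelated to the paper's GRNs.

def step(state, rules, schedule):
    # Apply each block synchronously, blocks in the given order.
    s = dict(state)
    for block in schedule:
        new = {n: rules[n](s) for n in block}
        s.update(new)
    return s

rules = {"a": lambda s: not s["b"], "b": lambda s: s["a"]}
x0 = {"a": False, "b": False}
print(step(x0, rules, [("a", "b")]))      # synchronous: {'a': True, 'b': False}
print(step(x0, rules, [("a",), ("b",)]))  # sequential:  {'a': True, 'b': True}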
Single-Trial Event-Related Potential Correlates of Belief Updating
Murawski, Carsten; Bode, Stefan
2015-01-01
Belief updating—the process by which an agent alters an internal model of its environment—is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170
Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2
A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is, to the authors' best knowledge, the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the measured and predicted natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
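A compact sketch of the firefly update rule applied to a generic model-updating misfit; the parameter values and the stand-in quadratic objective (playing the role of the frequency and mode-shape discrepancy) are illustrative assumptions.

import numpy as np

def firefly(objective, n=20, dim=2, iters=100, beta0=1.0, gamma=1.0,
            alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))      # candidate updating parameters
    f = np.array([objective(v) for v in x])   # brightness ~ low misfit
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:  # firefly j is brighter (lower misfit)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(0.0, 0.1, dim)
                    f[i] = objective(x[i])
        alpha *= 0.98  # cool the random walk over iterations
    return x[np.argmin(f)], float(f.min())

# Stand-in misfit: squared distance of two stiffness scaling factors from
# the (here known) values that reproduce the measured modal data.
target = np.array([0.3, -0.5])
best, fmin = firefly(lambda v: float(np.sum((v - target) ** 2)))
print(best, fmin)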
NASA Astrophysics Data System (ADS)
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-11-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency in the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Wisconsin's forest statistics, 1987: an inventory update.
W. Brad Smith; Jerold T. Hahn
1989-01-01
The Wisconsin 1987 inventory update, derived by using tree growth models, reports 14.7 million acres of timberland, a decline of less than 1% since 1983. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.
Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Gong, Xinwei; Socolar, Joshua
2009-03-01
Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs such as feed-forward loops and auto-repressive links that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando et al., Nature (London), 453(7197):944-947, 2008.
A trust region approach with multivariate Padé model for optimal circuit design
NASA Astrophysics Data System (ADS)
Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.
2017-11-01
Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2014-05-01
Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say, 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need to separate events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method equilibrates the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We decided to normalise the columns of the Jacobian based on their absolute column sum, while defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid: the calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
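A minimal sketch of the first balancing strategy: each Jacobian column is scaled by the inverse of its absolute column sum, with the factors capped so that insignificant sensitivities are not over-boosted. The cap value and the random test matrix are assumptions for illustration.

import numpy as np

def balance_columns(J, cap=1e3):
    # Absolute column sums measure per-parameter sensitivity.
    s = np.abs(J).sum(axis=0)
    # Capped inverse scaling: no column is boosted more than `cap` times
    # relative to the strongest one, keeping tiny sensitivities harmless.
    factors = 1.0 / np.maximum(s, np.max(s) / cap)
    return J * factors, factors  # scaled Jacobian and per-parameter scales

rng = np.random.default_rng(1)
J = rng.normal(size=(50, 8)) * 10.0 ** rng.integers(-6, 0, 8)
Jb, fac = balance_columns(J)
print(np.abs(Jb).sum(axis=0))  # column magnitudes are now comparable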
Wang, Peng; Zheng, Yefeng; John, Matthias; Comaniciu, Dorin
2012-01-01
Dynamic overlay of 3D models onto 2D X-ray images has important applications in image-guided interventions. In this paper, we present a novel catheter tracking method for motion compensation in Transcatheter Aortic Valve Implantation (TAVI). To address challenges such as catheter shape and appearance changes, occlusions, and distractions from cluttered backgrounds, we present an adaptive linear discriminant learning method that builds a measurement model online to distinguish catheters from the background. An analytic solution is developed to effectively and efficiently update the discriminant model and to minimize the classification errors between the tracked object and the background. The online learned discriminant model is further combined with an offline learned detector and robust template matching in a Bayesian tracking framework. Quantitative evaluations demonstrate the advantages of this method over current state-of-the-art tracking methods in tracking catheters for clinical applications.
Xian, George; Homer, Collin G.; Fry, Joyce
2009-01-01
The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline.
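A bare-bones change-vector illustration of the thresholding step: the per-pixel spectral change magnitude between the two dates is compared against a per-class cutoff, so unchanged pixels keep their NLCD 2001 label and changed pixels are flagged for reclassification. The band count, threshold values, and class codes are hypothetical.

import numpy as np

def update_labels(img01, img06, labels01, thresholds, changed_value=0):
    # img01/img06: (bands, rows, cols) normalized reflectance for the two dates;
    # labels01: NLCD 2001 class grid; thresholds: class -> magnitude cutoff.
    magnitude = np.sqrt(((img06 - img01) ** 2).sum(axis=0))  # change vector length
    cut = np.vectorize(lambda c: thresholds.get(c, np.inf))(labels01)
    updated = labels01.copy()
    updated[magnitude > cut] = changed_value  # flag change areas for reclassification
    return updated, magnitude

rng = np.random.default_rng(2)
a = rng.random((6, 4, 4))
b = a + rng.normal(0.0, 0.05, a.shape)
labels = rng.integers(1, 4, (4, 4))
new_labels, mag = update_labels(a, b, labels, {1: 0.1, 2: 0.15, 3: 0.2})
print((new_labels == 0).sum(), "pixels flagged as change")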
A Comparison of Combustor-Noise Models
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
The current status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) for current-generation (N) turbofan engines is summarized. Best methods for near-term updates are reviewed. Long-term needs and challenges for the N+1 through N+3 timeframe are discussed. This work was carried out under the NASA Fundamental Aeronautics Program, Subsonic Fixed Wing Project, Quiet Aircraft Subproject.
Updating the limit efficiency of silicon solar cells
NASA Technical Reports Server (NTRS)
Wolf, M.
1979-01-01
Evaluation of the limit efficiency based on the simplest, most basic mathematical method that is appropriate for the conditions imposed by the cell model is discussed. The methodology, the solar cell structure, and the selection of the material parameters used in the evaluation are described. The results are discussed including a set of design goals derived from the limit efficiency.
Real-time simulation of contact and cutting of heterogeneous soft-tissues.
Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian
2014-02-01
This paper presents a numerical method for interactive (real-time) simulations which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting, and other topological changes. We provide an integrated methodology able to deal with the ill-conditioning issues associated with material heterogeneities, with contact boundary conditions (one of the main sources of inaccuracy), and with cutting (one of the most challenging issues in interactive simulations). Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery, (ii) a simulation of laparoscopic hepatectomy, and (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.
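A serial toy of the low-frequency preconditioner idea: a Cholesky factor of an earlier system matrix preconditions conjugate-gradient solves over several steps and is refreshed only occasionally. The periodic refresh stands in for the paper's asynchronous update, and the random SPD matrix stands in for the finite element system.

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n = 200
base = rng.normal(size=(n, n))
A = base @ base.T + n * np.eye(n)   # SPD stand-in for the stiffness matrix
factor = cho_factor(A)              # initial 'expensive' factorization

for stepno in range(20):
    A += 0.01 * np.diag(rng.random(n))  # slowly varying system
    if stepno % 5 == 0:
        factor = cho_factor(A)          # low-frequency refresh of the factor
    # The (possibly stale) factor serves as the CG preconditioner.
    M = LinearOperator((n, n), matvec=lambda v: cho_solve(factor, v))
    x, info = cg(A, rng.normal(size=n), M=M)
    assert info == 0  # the stale factor still preconditions well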
NASA Astrophysics Data System (ADS)
Li, Y. P.; Elbern, H.; Lu, K. D.; Friese, E.; Kiendler-Scharr, A.; Mentel, Th. F.; Wang, X. S.; Wahner, A.; Zhang, Y. H.
2013-03-01
The formation of secondary organic aerosol (SOA) was simulated with the Secondary ORGanic Aerosol Model (SORGAM) using a classical gas-particle partitioning concept, the two-product model approach, which is widely used in chemical transport models. In this study, we extensively updated SORGAM with three major modifications: first, we derived temperature-dependence functions of the SOA yields for aromatics and biogenic VOCs, based on recent chamber studies, within a sophisticated mathematical optimization framework; second, we implemented the SOA formation pathways from photo-oxidation (OH-initiated) of isoprene; third, we implemented the SOA formation channel from NO3-initiated oxidation of reactive biogenic hydrocarbons (isoprene and monoterpenes). The temperature-dependence functions of the SOA yields were validated against available chamber experiments. Moreover, the whole updated SORGAM module was validated against ambient SOA observations, represented by the summed oxygenated organic aerosol (OOA) concentrations extracted from Aerosol Mass Spectrometer (AMS) measurements at a rural site near Rotterdam, the Netherlands, performed during the IMPACT campaign in May 2008. In this case, we embedded both the original and the updated SORGAM modules into the EURopean Air pollution and Dispersion-Inverse Model (EURAD-IM), which showed generally good agreement with the observed meteorological parameters and several secondary products such as O3, sulfate, and nitrate. With the updated SORGAM module, the EURAD-IM model also captured the observed SOA concentrations reasonably well, especially those during nighttime. In contrast, the EURAD-IM model before the update underestimated the observations by a factor of up to 5. The large improvement in the modeled SOA concentrations by the updated SORGAM was attributed to the three modifications mentioned above. Embedding the temperature-dependence functions of the SOA yields, including the new pathways from isoprene photo-oxidation, and switching on the SOA formation from NO3-initiated oxidation of biogenic VOCs contributed to this enhancement by 10%, 22% and 47%, respectively. However, the EURAD-IM model with the updated SORGAM still clearly underestimated the afternoon SOA observations, by up to a factor of two. More work, such as improving the simulated OH concentrations under high-VOC and low-NOx conditions, including SOA formation from semi-volatile organic compounds, correctly representing the aging of aerosols and oligomerization, and accounting for the influence of anthropogenic SOA on biogenic SOA, is still required to fill the gap.
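For reference, a sketch of the underlying two-product (Odum) partitioning with a Clausius-Clapeyron style temperature scaling of the partitioning coefficients; the alpha/K pairs and the vaporization enthalpy are placeholders, not SORGAM's updated parameters.

import numpy as np

def k_of_t(k298, t, dh_vap=30e3, r=8.314):
    # Temperature scaling of the partitioning coefficient K (m^3/ug):
    # K grows as temperature drops, favoring the particle phase.
    return k298 * (298.0 / t) * np.exp((dh_vap / r) * (1.0 / t - 1.0 / 298.0))

def soa_yield(m_o, t, products=((0.05, 0.15), (0.20, 0.004))):
    # Fractional aerosol yield for absorbing organic mass m_o (ug/m^3),
    # summed over the two surrogate products (alpha_i, K_i at 298 K).
    return sum(a * k_of_t(k, t) * m_o / (1.0 + k_of_t(k, t) * m_o)
               for a, k in products)

for temp in (288.0, 298.0, 308.0):
    print(temp, round(soa_yield(10.0, temp), 4))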
Crucial role of strategy updating for coexistence of strategies in interaction networks.
Zhang, Jianlei; Zhang, Chunyan; Cao, Ming; Weissing, Franz J
2015-04-01
Network models are useful tools for studying the dynamics of social interactions in a structured population. After a round of interactions with the players in their local neighborhood, players update their strategy based on the comparison of their own payoff with the payoff of one of their neighbors. Here we show that the assumptions made on strategy updating are of crucial importance for the strategy dynamics. In the first step, we demonstrate that seemingly small deviations from the standard assumptions on updating have major implications for the evolutionary outcome of two cooperation games: cooperation can more easily persist in a Prisoner's Dilemma game, while it can go more easily extinct in a Snowdrift game. To explain these outcomes, we develop a general model for the updating of states in a network that allows us to derive conditions for the steady-state coexistence of states (or strategies). The analysis reveals that coexistence crucially depends on the number of agents consulted for updating. We conclude that updating rules are as important for evolution on a network as network structure and the nature of the interaction.
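One concrete instance of such a payoff-comparison update is the Fermi imitation rule sketched below; the abstract does not name a specific rule, so this is a commonly used stand-in, with an invented four-node network and noise parameter K.

import math, random

def fermi_update(strategies, payoffs, neighbors, i, K=0.1, rng=random):
    # Agent i copies a randomly chosen neighbor j with probability
    # 1 / (1 + exp(-(payoff_j - payoff_i) / K)).
    j = rng.choice(neighbors[i])
    p = 1.0 / (1.0 + math.exp(-(payoffs[j] - payoffs[i]) / K))
    if rng.random() < p:
        strategies[i] = strategies[j]

strategies = ["C", "D", "C", "D"]
payoffs = [1.0, 1.5, 0.8, 1.2]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
random.seed(4)
fermi_update(strategies, payoffs, neighbors, 0)
print(strategies)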
NASA Astrophysics Data System (ADS)
Sharkawi, K.-H.; Abdul-Rahman, A.
2013-09-01
Cities and urban-area entities such as building structures are becoming more complex as modern human civilizations continue to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient to cope with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. 3D city models have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on 3D city models. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available in the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models at five well-defined Levels of Detail (LoD), namely LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components of CityGML and 3D city models, and provides important input for any analysis. However, more often than not, semantic information is not available for a 3D city model due to unstandardized modelling processes. One example is where a building is generated as a single object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives.
For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes to the 3D buildings, store and retrieve semantic information of the changed structures, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.
2017-10-24
The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
On the computation and updating of the modified Cholesky decomposition of a covariance matrix
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Methods for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix when one is given only the original data are described. These methods are the standard method of forming the covariance matrix K and then solving for the MCD, L and D (where K = LDL^T); a method based on Householder reflections; and lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite the fact that it is the slowest one, since (1) the relative amount of time computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing these algorithms are listed.
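A plain-numpy sketch of the first (standard) method, forming the covariance matrix and factoring it as K = LDL^T; the Householder and composite-t variants, including the stable updating procedure the report recommends, are not shown.

import numpy as np

def ldlt(K):
    # Modified Cholesky: K = L D L^T with unit lower-triangular L.
    n = K.shape[0]
    L, D = np.eye(n), np.zeros(n)
    for j in range(n):
        D[j] = K[j, j] - (L[j, :j] ** 2) @ D[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - (L[i, :j] * L[j, :j]) @ D[:j]) / D[j]
    return L, D

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))          # 'original data', observations x variables
K = np.cov(X, rowvar=False)            # standard method: form K first
L, D = ldlt(K)
print(np.allclose(L @ np.diag(D) @ L.T, K))  # True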
NASA Astrophysics Data System (ADS)
Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.
2017-03-01
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40 % of the OA in southern California (half of which was SOA), making mobile sources the single largest source contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5 % of the total OA. Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product’s reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data; the latter is the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated by updating and iterating the estimates; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
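A toy version of the core inference step, assuming a Wiener degradation process with known diffusion so that a normal prior on the drift updates conjugately, and using the inverse-Gaussian mean first-passage time D/mu as the lifetime estimate; all numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(6)
sigma, dt, true_drift = 0.2, 1.0, 0.05
# Observed degradation increments of a Wiener process at one stress level.
increments = true_drift * dt + sigma * np.sqrt(dt) * rng.normal(size=100)

# Normal prior on the drift; known diffusion makes the update conjugate.
mu0, var0 = 0.1, 0.05 ** 2
var_n = 1.0 / (1.0 / var0 + increments.size * dt / sigma ** 2)
mu_n = var_n * (mu0 / var0 + increments.sum() / sigma ** 2)

threshold = 5.0  # failure when degradation first crosses this level
print("posterior drift: %.4f +/- %.4f" % (mu_n, np.sqrt(var_n)))
print("estimated mean lifetime: %.1f" % (threshold / mu_n))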
Super-nodal methods for space-time kinetics
NASA Astrophysics Data System (ADS)
Mertyurek, Ugur
The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero-buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and the collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis, based on the relative residual in the 3-D few-group diffusion equation at the fine mesh level, is also introduced in this work.
Minnesota's forest statistics, 1987: an inventory update.
Jerold T. Hahn; W. Brad Smith
1987-01-01
The Minnesota 1987 inventory update, derived by using tree growth models, reports 13.5 million acres of timberland, a decline of less than 1% since 1977. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.
Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing
NASA Astrophysics Data System (ADS)
Krajíček, Jiří
This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of the human information model (HIM) as a cooperative system. Here we continue with the HIM design in detail. In our design, we first introduce the Content/Form computing system, a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM model (a type of artificial neural network) as a basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method with a guided image filter for accurate and robust night fusion image tracking. First, frame differencing is applied to produce a coarse target, which helps to generate observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Then, accurate boundaries of the target can be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
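A bare-bones gray-scale guided filter (in the sense of He et al.) of the kind used here to refine the coarse frame-difference target; the window radius, epsilon, and the random stand-in frames are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    # I: guidance image, p: coarse target mask, both float arrays in [0, 1].
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)   # local linear model: q = a * I + b
    b = mp - a * mI
    return mean(a) * I + mean(b)

rng = np.random.default_rng(7)
frame = rng.random((64, 64))
coarse = (frame > 0.7).astype(float)   # stand-in frame-difference mask
refined = guided_filter(frame, coarse)
print(refined.shape)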
Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun
2017-05-01
The dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, which are co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge regarding the typical structures, specific features, and local details of training-set images. The prior knowledge helps to improve the reconstruction quality greatly. This method has been successfully applied in magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and the dictionaries are fixed and nonadaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated correspondingly along with the image updating stage to ensure their self-adaptivity. The updated dictionaries directly incorporate both the prior information of the training sets and the test image, and both feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.
NASA Astrophysics Data System (ADS)
Zhang, Wancheng; Xu, Yejun; Wang, Huimin
2016-01-01
The aim of this paper is to put forward a consensus-reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of 2-tuple linguistic labels are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to calculate the incomplete weight information of experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion which occurs regularly in linguistic information processing. Finally, an illustrative example is given to illustrate the application of the proposed method, and comparative analyses with existing methods are offered to show the advantages of the proposed method.
Lee, E.A.; Kish, J.L.; Zimmerman, L.R.; Thurman, E.
2001-01-01
An analytical method using high-performance liquid chromatography/mass spectrometry (HPLC/MS) was developed by the U.S. Geological Survey in 1999 for the analysis of selected chloroacetanilide herbicide degradation compounds in water. These compounds were acetochlor ethane sulfonic acid (ESA), acetochlor oxanilic acid (OXA), alachlor ESA, alachlor OXA, metolachlor ESA, and metolachlor OXA. The HPLC/MS method was updated in 2000, and the method detection limits were modified accordingly. Four other degradation compounds also were added to the list of compounds that can be analyzed using HPLC/MS; these compounds were dimethenamid ESA, dimethenamid OXA, flufenacet ESA, and flufenacet OXA. Except for flufenacet OXA, good precision and accuracy were demonstrated for the updated HPLC/MS method in buffered reagent water, surface water, and ground water. The mean HPLC/MS recoveries of the degradation compounds from water samples spiked at 0.20 and 1.0 µg/L (microgram per liter) ranged from 75 to 114 percent, with relative standard deviations of 15.8 percent or less for all compounds except flufenacet OXA, which had relative standard deviations ranging from 11.3 to 48.9 percent. Method detection levels (MDLs) using the updated HPLC/MS method varied from 0.009 to 0.045 µg/L, with the flufenacet OXA MDL at 0.072 µg/L. The updated HPLC/MS method is valuable for acquiring information about the fate and transport of the parent chloroacetanilide herbicides in water.
Levasseur-Garcia, Cecile
2018-01-10
Each year, mycotoxins cause economic losses of several billion US dollars worldwide. Consequently, methods must be developed, for producers and cereal manufacturers, to detect these toxins and to comply with regulations. Chromatographic reference methods are time consuming and costly. Thus, alternative methods such as infrared spectroscopy are being increasingly developed to provide simple, rapid, and nondestructive methods to detect mycotoxins. This article reviews research conducted over the last eight years into the use of near-infrared and mid-infrared spectroscopy to monitor mycotoxins in corn, wheat, and barley. More specifically, we focus on the Fusarium species and on the main fusariotoxins of deoxynivalenol, zearalenone, and fumonisin B1 and B2. Quantification models are insufficiently precise to satisfy the legal requirements. Sorting models with cutoff levels are the most promising applications.
"Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
... projections models, as well as changes to future vehicle mix assumptions, that influence the emission... methodology that may occur in the future such as updated socioeconomic data, new models, and other factors... updated mobile emissions model, the Motor Vehicle Emissions Simulator (also known as MOVES2010a), and to...
Separate encoding of model-based and model-free valuations in the human brain.
Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter
2011-10-01
Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.
Summary Analysis: Hanford Site Composite Analysis Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, W. E.; Lehman, L. L.
2017-06-05
The Hanford Site’s currently maintained Composite Analysis, originally completed in 1998, requires an update. A previous update effort was undertaken by the U.S. Department of Energy (DOE) in 2001-2005, but was ended before completion to allow the Tank Closure & Waste Management Environmental Impact Statement (TC&WM EIS) (DOE/EIS-0391) to be prepared without the potential for conflicting sitewide models. This EIS was issued in 2012, and the deferral was ended with guidance in the memorandum “Modeling to Support Regulatory Decision Making at Hanford” (Williams, 2012), provided with the aim of ensuring that subsequent modeling is consistent with the EIS.
Updated Panel-Method Computer Program
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1995-01-01
Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. Contains several advanced features, including internal mathematical modeling of flow, time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is Silicon Graphics IRIS program created to support scientific-visualization needs of PMARC_12. GVS available separately from COSMIC. PMARC_12 written in standard FORTRAN 77, with exception of NAMELIST extension used for input.
Updating categorical soil maps using limited survey data by Bayesian Markov chain cosimulation.
Li, Weidong; Zhang, Chuanrong; Dey, Dipak K; Willig, Michael R
2013-01-01
Updating categorical soil maps is necessary for providing current, higher-quality soil data for agricultural and environmental management, but it may not require a costly, thorough field survey because the latest legacy maps may need only limited corrections. This study suggests a Markov chain random field (MCRF) sequential cosimulation (Co-MCSS) method for updating categorical soil maps using limited survey data, provided that qualified legacy maps are available. A case study using synthetic data demonstrates that Co-MCSS can appreciably improve the simulation accuracy of soil types with contributions from both a legacy map and limited sample data. The method exhibits the following characteristics: (1) if a soil type shows no change in an update survey, or has been reclassified into another type that similarly shows no change, it is simply reproduced in the updated map; (2) if a soil type has changed in some places, it is simulated with uncertainty quantified by occurrence probability maps; (3) if a soil type has no change in an area but has changed in other, distant areas, it can still be captured in that area with little apparent uncertainty. We conclude that Co-MCSS may be a practical method for updating categorical soil maps with limited survey data.
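To convey the flavor of conditional Markov-chain simulation of categorical classes, here is a deliberately simplified 1D sketch, not the authors' Co-MCSS algorithm: unsampled cells are drawn from a Markov bridge between the previously simulated neighbor and the next observed survey point. The transition matrix and observations are invented for illustration.

```python
# Minimal 1D illustration of conditional Markov-chain simulation of
# categorical classes (a simplified stand-in for the Co-MCSS idea, not
# the authors' algorithm): each unsampled cell is simulated conditional
# on its simulated left neighbor and the next observed cell.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical transition matrix among 3 soil classes (rows sum to 1).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

n = 30
obs = {0: 0, 12: 1, 29: 2}        # sparse "update survey" observations
sim = np.full(n, -1)
for i, c in obs.items():
    sim[i] = c

obs_idx = sorted(obs)
for left, right in zip(obs_idx[:-1], obs_idx[1:]):
    for i in range(left + 1, right):
        m = right - i  # distance to the next observation
        # Markov bridge: P(c | sim[i-1], sim[right]) is proportional to
        #   T[sim[i-1], c] * (T^m)[c, sim[right]]
        w = T[sim[i - 1], :] * np.linalg.matrix_power(T, m)[:, sim[right]]
        sim[i] = rng.choice(3, p=w / w.sum())

print(sim)
```

Conditioning on the next observation as well as the previous cell is what keeps simulated classes consistent with the survey data on both sides, rather than drifting freely between sample points.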
Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati
2016-10-01
The Newton method has some shortcomings, including the computation of the Jacobian matrix, which may be difficult or even impossible, and the need to solve the Newton system at every iteration. A common setback with quasi-Newton methods is that they must compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome these drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate inverse Jacobian Hk of the PSB update is updated directly, which improves efficiency and requires low memory storage; this is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to benchmark problems.
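For concreteness, the sketch below implements a quasi-Newton iteration with the standard direct PSB rank-two update of the Jacobian approximation; the paper's variant updates the approximate inverse Hk instead. The test system, starting point, and finite-difference initialization are illustrative assumptions.

```python
# Sketch of a quasi-Newton iteration with the Powell-Symmetric-Broyden
# (PSB) update of the Jacobian approximation B. This is the standard
# direct PSB update, not the authors' inverse (Hk) variant; the test
# system is illustrative.
import numpy as np

def F(x):
    # Example nonlinear system with roots at (1, 1) and (-1, -1).
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

def fd_jacobian(f, x, eps=1e-7):
    # Finite-difference Jacobian, used only to initialize B.
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def psb_solve(x, B, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)      # quasi-Newton step
        x_new = x + s
        y = F(x_new) - fx
        r = y - B @ s
        ss = s @ s
        # PSB: symmetric rank-two correction satisfying B_new @ s = y,
        # so no new Jacobian evaluation is needed.
        B = B + (np.outer(r, s) + np.outer(s, r)) / ss \
              - ((r @ s) / ss**2) * np.outer(s, s)
        x = x_new
    return x

x0 = np.array([2.0, 0.5])
print(psb_solve(x0, fd_jacobian(F, x0)))  # converges to a root, near (1, 1)
```

The update keeps the secant condition B_new s = y while preserving symmetry, which is what distinguishes PSB from the plain (nonsymmetric) Broyden update.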
A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome
Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.
2014-01-01
This paper presents a novel repeated latent class model for a longitudinal response that is measured frequently, as in our prospective study of older adults with monthly data on activities of daily living (ADL) spanning more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response, wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled through class-specific probabilities of the event via shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70,500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416
1999 update of the Arizona highway cost allocation study
DOT National Transportation Integrated Search
1999-08-01
The purpose of this report was to update the Arizona highway cost allocation study and to evaluate the alternative of using the new FHWA cost allocation model as a replacement. The update revealed that the repeal of Arizona's weight-distance tax has l...
NASA Astrophysics Data System (ADS)
Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han
2017-11-01
Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, existing techniques and methodologies for degradation measurement still face challenges, such as the lack of an appropriate degradation indicator, insufficient accuracy, and poor capability to track data fluctuations. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and a Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for dynamic model updating. Two key parameters, namely the step size and the number of states, enable adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method performs better than other conventional methods in terms of both measurement accuracy and the tracking of data fluctuations.
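As a rough sketch of the moving-window grey-model component, the code below refits a GM(1,1) model over a sliding window and makes one-step predictions on a synthetic trend. The Markov state correction and the multi-feature degradation index of the paper are omitted, and the window size and series are illustrative assumptions.

```python
# Minimal sketch of degradation-trend prediction with a grey GM(1,1)
# model refit over a moving window. The Markov correction and the
# multi-feature degradation index from the paper are omitted; the
# series below is synthetic.
import numpy as np

def gm11_next(x0):
    """Fit GM(1,1) to the window x0 (positive values); predict the next
    point."""
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)          # next raw value

rng = np.random.default_rng(3)
series = 1.0 + 0.02 * np.arange(60) + 0.01 * rng.standard_normal(60)

window = 10  # moving-window size (a tuning parameter in the paper)
preds = [gm11_next(series[i - window:i]) for i in range(window, 60)]
err = np.abs(np.array(preds) - series[window:])
print(f"mean absolute one-step error: {err.mean():.4f}")
```

Refitting within a window rather than over the full history is what lets the model track a drifting degradation trend instead of averaging over regimes that no longer apply.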
Forecasting the (un)productivity of the 2014 M 6.0 South Napa aftershock sequence
Llenos, Andrea L.; Michael, Andrew J.
2017-01-01
The 24 August 2014 Mw 6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude. In the first 4.5 days, only 59 M≥1.8 aftershocks occurred, the largest of which was an M 3.9 that happened a little over two days after the mainshock. We investigate the aftershock productivity of the South Napa sequence and compare it with other M≥5.5 California strike‐slip mainshock–aftershock sequences. While the productivity of the South Napa sequence is among the lowest, northern California mainshocks generally have fewer aftershocks than mainshocks further south, although the productivities vary widely in both regions. An epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988) fit to Napa seismicity from 1980 to 23 August 2014 fits the sequence well and suggests that low‐productivity sequences are typical of this area. Utilizing regional variations in productivity could improve operational earthquake forecasting (OEF) by improving the model used immediately after the mainshock. We show this by comparing the daily rate of M≥2 aftershocks to forecasts made with the generic California model (Reasenberg and Jones, 1989; hereafter, RJ89), RJ89 models with productivity updated daily, a generic California ETAS model, an ETAS model based on premainshock seismicity, and ETAS models updated daily following the mainshock. RJ89 models for which only the productivity is updated provide better forecasts than the generic RJ89 California model, and the Napa‐specific ETAS models forecast the aftershock rates more accurately than either generic model. Therefore, forecasts that use localized initial parameters and that rapidly update the productivity may be better for OEF than using a generic model and/or updating all parameters.
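For readers unfamiliar with the RJ89 model referenced above, the sketch below evaluates its modified-Omori aftershock rate and the corresponding closed-form expected count. The generic California parameter values are quoted from memory and should be treated as approximate illustrations, not authoritative values.

```python
# Sketch of the Reasenberg-Jones (1989) aftershock model used in the
# kind of forecast comparison described above. Parameter values are the
# oft-quoted generic California ones, given here from memory and used
# only for illustration.

def rj_rate(t, mainshock_mag, m_min, a=-1.67, b=0.91, p=1.08, c=0.05):
    """Aftershock rate (events/day, M >= m_min) at t days after the
    mainshock."""
    return 10 ** (a + b * (mainshock_mag - m_min)) * (t + c) ** (-p)

def rj_expected_count(t1, t2, mainshock_mag, m_min,
                      a=-1.67, b=0.91, p=1.08, c=0.05):
    """Expected aftershock count between t1 and t2 days (closed-form
    integral of the rate; valid for p != 1)."""
    k = 10 ** (a + b * (mainshock_mag - m_min))
    return k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

# Generic-model expectations for an M 6.0 mainshock; a low-productivity
# sequence like South Napa falls well below these.
print(f"rate at day 1: {rj_rate(1.0, 6.0, 2.0):.1f} M>=2 events/day")
print(f"first-day total: {rj_expected_count(0.0, 1.0, 6.0, 2.0):.0f} events")
```

Updating only the productivity term (the factor k above) from observed early aftershocks, while keeping the generic decay parameters, is the cheap adjustment the study finds already improves forecasts over the fully generic model.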
Xu, Xiangtao; Medvigy, David; Powers, Jennifer S; Becknell, Justin M; Guan, Kaiyu
2016-10-01
We assessed whether diversity in plant hydraulic traits can explain the observed diversity in plant responses to water stress in seasonally dry tropical forests (SDTFs). The Ecosystem Demography model 2 (ED2) was updated with a trait-driven mechanistic plant hydraulic module, as well as novel drought-phenology and plant water stress schemes. Four plant functional types were parameterized on the basis of meta-analysis of plant hydraulic traits. Simulations from both the original and the updated ED2 were evaluated against 5 yr of field data from a Costa Rican SDTF site and remote-sensing data over Central America. The updated model generated realistic plant hydraulic dynamics, such as leaf water potential and stem sap flow. Compared with the original ED2, predictions from our novel trait-driven model matched better with observed growth, phenology and their variations among functional groups. Most notably, the original ED2 produced unrealistically small leaf area index (LAI) and underestimated cumulative leaf litter. Both of these biases were corrected by the updated model. The updated model was also better able to simulate spatial patterns of LAI dynamics in Central America. Plant hydraulic traits are intercorrelated in SDTFs. Mechanistic incorporation of plant hydraulic traits is necessary for the simulation of spatiotemporal patterns of vegetation dynamics in SDTFs in vegetation models.
NASA Astrophysics Data System (ADS)
Işık, Şahin; Özkan, Kemal; Günal, Serkan; Gerek, Ömer Nezih
2018-03-01
Change detection via background subtraction remains an unresolved issue and attracts research interest due to the challenges posed by both static and dynamic scenes. The key challenge is how to update dynamically changing backgrounds from frames with an adaptive, self-regulated feedback mechanism. To achieve this, we present an effective change detection algorithm for pixelwise changes. A sliding window approach combined with dynamic control of the update parameters is introduced for updating background frames, which we call sliding-window-based change detection. Comprehensive experiments on related test videos show that the integrated algorithm yields good objective and subjective performance by overcoming illumination variations, camera jitter, and intermittent object motion. We argue that the proposed method is a fair alternative in most types of foreground extraction scenarios, unlike case-specific methods, which normally fail in scenarios they were not designed for.
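A minimal sketch of the sliding-window idea, not the authors' algorithm, might look like the following: the feedback mechanism is reduced here to a per-pixel threshold inflated by window variability, and all parameters are invented for illustration.

```python
# Minimal sketch of sliding-window background subtraction for pixelwise
# change detection. The paper's dynamic parameter control is reduced to
# a simple rule that loosens the threshold where the recent window is
# volatile; all parameters are illustrative.
from collections import deque
import numpy as np

class SlidingWindowDetector:
    def __init__(self, window=25, base_thresh=30.0):
        self.frames = deque(maxlen=window)   # the sliding window
        self.base_thresh = base_thresh

    def apply(self, frame):
        frame = frame.astype(np.float32)
        self.frames.append(frame)
        stack = np.stack(list(self.frames))
        background = np.median(stack, axis=0)
        # Feedback: raise the per-pixel threshold where the window is
        # noisy (e.g., dynamic backgrounds, camera jitter).
        thresh = self.base_thresh + 2.0 * stack.std(axis=0)
        return (np.abs(frame - background) > thresh).astype(np.uint8)

# Usage on synthetic grayscale frames: background noise, then one
# bright object entering in the final frame.
rng = np.random.default_rng(4)
det = SlidingWindowDetector()
for _ in range(29):
    det.apply(rng.integers(0, 10, size=(48, 64)))
last = rng.integers(0, 10, size=(48, 64))
last[10:20, 10:20] = 255  # simulated foreground object
mask = det.apply(last)
print(mask.sum(), "changed pixels detected")  # roughly the 100-pixel object
```

Because the background estimate is rebuilt from only the last few frames, slow illumination drift is absorbed into the background while genuinely new objects still stand out, which is the trade-off the window length controls.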